Python Interface¶
The Python interface of VUnit is exposed through the VUnit class, which can be imported directly. See the User Guide for a quick introduction. The following list provides a detailed reference of the Python API and describes how to set compilation and simulation options.
- vunit.ui
VUnit
VUnit.add_array_util()
VUnit.add_com()
VUnit.add_compile_option()
VUnit.add_external_library()
VUnit.add_json4vhdl()
VUnit.add_library()
VUnit.add_osvvm()
VUnit.add_preprocessor()
VUnit.add_random()
VUnit.add_source_file()
VUnit.add_source_files()
VUnit.add_source_files_from_csv()
VUnit.add_verification_components()
VUnit.add_verilog_builtins()
VUnit.add_vhdl_builtins()
VUnit.enable_check_preprocessing()
VUnit.enable_location_preprocessing()
VUnit.from_args()
VUnit.from_argv()
VUnit.get_compile_order()
VUnit.get_implementation_subset()
VUnit.get_libraries()
VUnit.get_simulator_name()
VUnit.get_source_file()
VUnit.get_source_files()
VUnit.library()
VUnit.main()
VUnit.set_attribute()
VUnit.set_compile_option()
VUnit.set_generic()
VUnit.set_parameter()
VUnit.set_sim_option()
VUnit.simulator_supports_coverage()
- LibraryList
- Library
Library
Library.add_compile_option()
Library.add_source_file()
Library.add_source_files()
Library.entity()
Library.get_source_file()
Library.get_source_files()
Library.get_test_benches()
Library.module()
Library.name
Library.set_compile_option()
Library.set_generic()
Library.set_parameter()
Library.set_sim_option()
Library.test_bench()
- SourceFileList
- SourceFile
- TestBench
TestBench
TestBench.add_config()
TestBench.get_tests()
TestBench.library
TestBench.name
TestBench.scan_tests_from_file()
TestBench.set_attribute()
TestBench.set_generic()
TestBench.set_parameter()
TestBench.set_post_check()
TestBench.set_pre_config()
TestBench.set_sim_option()
TestBench.set_vhdl_configuration_name()
TestBench.test()
- Test
- Results
- Preprocessor
- Compilation Options
- Simulation Options
Configurations¶
In the VUnit Python API the term configuration denotes the user-controllable configuration of one test run, such as generic/parameter settings and simulation options, as well as the pre_config and post_check callback functions. User attributes can also be added as part of a configuration.
Depending on the situation, configurations are either unique to each test case or common to the entire test bench. For test benches without test cases, such as tb_example in the User Guide, the configuration is common to the entire test bench. For test benches containing test cases, such as tb_example_many, the configuration is done per test case. If the run_all_in_same_sim attribute has been used, configuration is performed at the test bench level even if there are individual tests within, since they must run in the same simulation.
In VUnit all test benches and test cases are created with an unnamed default configuration, which is modified by methods such as set_generic. In addition to the unnamed default configuration, multiple named configurations can be derived from it using the add_config method. The default configuration is only run if there are no named configurations.
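As an illustration, here is a minimal run.py sketch that first modifies the default configuration and then derives named configurations from it. The test bench name tb_example_many and the generic data_width are assumptions made for this example, not part of the API.

from vunit import VUnit

vu = VUnit.from_argv()
vu.add_vhdl_builtins()
lib = vu.add_library("lib")
lib.add_source_files("*.vhd")

# tb_example_many and data_width are hypothetical names for this sketch
tb = lib.test_bench("tb_example_many")

# Modify the unnamed default configuration
tb.set_generic("data_width", 32)

# Derive named configurations; the default configuration will no longer be run
for width in [8, 16]:
    tb.add_config(name="width_%d" % width, generics=dict(data_width=width))

vu.main()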
Attributes¶
The user may set custom attributes on test cases, either via code comments or via the set_attribute method. The attributes can, for example, be used to achieve requirements traceability. The attributes are exported in the JSON Export. All user-defined attributes must start with a dot (.) since non-dot attributes are reserved for built-in attributes.
Attributes set via the Python interface will effectively overwrite the value of a user attribute set via code comments.
Attribute example¶
if run("Test 1") then
-- vunit: .requirement-117
end if;
`TEST_SUITE begin
`TEST_CASE("Test 1") begin
// vunit: .requirement-117
end
end
my_test.set_attribute(".requirement-117", None)
{
"attributes": {
".requirement-117": null
}
}
Pre and post simulation hooks¶
There are two hooks for running user-defined Python code:
- pre_config:
  A pre_config is called before simulation of the test case. The function accepts an optional string argument output_path, which is the filesystem path to the directory where test outputs are stored, and an optional string argument simulator_output_path, which is the path to the simulator working directory.
  Note: simulator_output_path is shared by all test runs. The user must take care that test runs do not read or write the same files asynchronously. It is therefore recommended to use output_path in favor of simulator_output_path.
  A pre_config must return True or the test will fail immediately.
  The use case for pre_config is to automatically create input data files that are read by the test case during simulation. The test bench can access the test case unique output_path via a special generic or by using the output_path function. See the run library user guide.
  The use case for simulator_output_path is to support code expecting input files to be located in the simulator working directory.
- post_check:
  A post_check is called after a passing simulation of the test case. The function accepts an optional string argument output_path, which is the filesystem path to the directory where test outputs are stored, and an optional string argument output, which is the full standard output from the test containing the simulator transcript.
  The function must return True or the test will fail.
  The use case is to automatically check output data files that are written by the test case during simulation. The test bench can access the test case unique output_path via a special generic or by using the output_path function. See the run library user guide.
  Note: The post_check function is only called after a passing test and is skipped in case of failure.
Hook example¶
from os.path import join

# write_data, read_data, compute_expected, check_equal and
# create_random_data are assumed to be user-provided helpers.

class DataChecker:
    """
    Provides input data to test bench and checks its output data
    """

    def __init__(self, data):
        self.input_data = data

    def pre_config(self, output_path):
        write_data(self.input_data, join(output_path, "input.csv"))
        return True

    def post_check(self, output_path):
        expected = compute_expected(self.input_data)
        got = read_data(join(output_path, "output.csv"))
        return check_equal(got, expected)


# Just change the original test
checker = DataChecker(data=create_random_data(seed=11))
my_test.set_pre_config(checker.pre_config)
my_test.set_post_check(checker.post_check)

# .. or create many configurations of the test
for seed in range(10, 20):
    checker = DataChecker(data=create_random_data(seed))
    my_test.add_config(
        name="seed%i" % seed,
        pre_config=checker.pre_config,
        post_check=checker.post_check,
    )
Adding Custom Command Line Arguments¶
It is possible to add custom command line arguments to your run.py scripts using the VUnitCLI class. A VUnitCLI object has a parser field, which is an ArgumentParser object from the argparse library.
from vunit import VUnitCLI, VUnit
# Add custom command line argument to standard CLI
# Beware of conflicts with existing arguments
cli = VUnitCLI()
cli.parser.add_argument('--custom-arg', ...)
args = cli.parse_args()
# Create VUnit instance from custom arguments
vu = VUnit.from_args(args=args)
# Use args.custom_arg here ...
print(args.custom_arg)
- class vunit.vunit_cli.VUnitCLI(description=None)¶
VUnit command line interface
- parse_args(argv=None)¶
Parse command line arguments
- Parameters:
argv – Use explicit argv instead of the actual command line arguments
- Returns:
The parsed argument namespace object
- vunit.vunit_cli.nonnegative_int(val)¶
argparse type check for a non-negative integer
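As a brief illustration, nonnegative_int can be used as an argparse type for custom arguments. The sketch below assumes a hypothetical --iterations argument added to the standard CLI.

from vunit import VUnitCLI
from vunit.vunit_cli import nonnegative_int

cli = VUnitCLI()
# --iterations is a hypothetical custom argument; negative values are
# rejected at parse time by nonnegative_int
cli.parser.add_argument("--iterations", type=nonnegative_int, default=1)
args = cli.parse_args(["--iterations", "4"])
print(args.iterations)  # 4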