Command Line Interface#

A VUnit object can be created from command line arguments by using the from_argv method, effectively creating a custom command line tool for running tests in the user project. Source files and libraries are added to the project using methods on the VUnit object. The configuration is followed by a call to the main method, which will execute the function specified by the command line arguments and exit the script. The added source files are automatically scanned for test cases.
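
For example, a minimal run.py along these lines turns the script into a command line tool (a sketch; the library name and file glob are placeholders, and depending on the VUnit version a call such as vu.add_vhdl_builtins() may also be needed):

Minimal run.py sketch#
from pathlib import Path
from vunit import VUnit

# Create the VUnit object from command line arguments
vu = VUnit.from_argv()

# Add a library and the source files that will be scanned for test cases
lib = vu.add_library("lib")
lib.add_source_files(Path(__file__).parent / "*.vhd")

# Execute the function selected by the command line arguments and exit
vu.main()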

Usage#

VUnit command line tool version 5.0.0-dev

usage: run.py [-h] [--with-attributes WITH_ATTRIBUTES]
              [--without-attributes WITHOUT_ATTRIBUTES] [-l] [-f] [--compile]
              [-m] [-k] [--fail-fast] [--elaborate] [--clean] [-o OUTPUT_PATH]
              [-x XUNIT_XML] [--xunit-xml-format {jenkins,bamboo}] [--exit-0]
              [--dont-catch-exceptions] [-v] [-q] [--no-color]
              [--log-level {info,error,warning,debug}] [-p NUM_THREADS] [-u]
              [--export-json EXPORT_JSON] [--version] [-g]
              [--gtkwave-fmt {vcd,fst,ghw}] [--gtkwave-args GTKWAVE_ARGS]
              [--cdslib CDSLIB] [--hdlvar HDLVAR]
              [tests ...]

Positional Arguments#

tests

Tests to run

Default: “*”

Named Arguments#

--with-attributes

Only select tests with these attributes set

--without-attributes

Only select tests without these attributes set

-l, --list

Only list all test cases

Default: False

-f, --files

Only list all files in compile order

Default: False

--compile

Only compile project without running tests

Default: False

-m, --minimal

Only compile files required for the (filtered) test benches

Default: False

-k, --keep-compiling

Continue compiling even after errors, only skipping files that depend on failed files

Default: False

--fail-fast

Stop immediately on first failing test

Default: False

--elaborate

Only elaborate test benches without running

Default: False

--clean

Remove output path first. This is useful, for example, to force a complete recompile when compilation artifacts are obsolete due to a simulator version update.

Default: False

-o, --output-path

Output path for compilation and simulation artifacts

Default: “./vunit_out”

-x, --xunit-xml

Xunit test report .xml file

--xunit-xml-format

Possible choices: jenkins, bamboo

Only valid with the --xunit-xml argument. Defines where in the XML file the simulator output is stored on a failure. “jenkins” = Output stored in <system-out>, “bamboo” = Output stored in <failure>.

Default: “jenkins”

--exit-0

Exit with code 0 even if a test failed. Still exits with code 1 on fatal errors such as compilation failure

Default: False

--dont-catch-exceptions

Let exceptions bubble up all the way. Useful when running with “python -m pdb”.

Default: False

-v, --verbose

Print test output immediately, not only on failure

Default: False

-q, --quiet

Do not print test output even in the case of failure

Default: False

--no-color

Do not color output

Default: False

--log-level

Possible choices: info, error, warning, debug

Log level of VUnit internal python logging. Used for debugging

Default: “warning”

-p, --num-threads

Number of tests to run in parallel. Test output is not continuously written in verbose mode with -p > 1

Default: 1

-u, --unique-sim

Do not re-use the same simulator process for running different test cases (slower)

Default: False

--export-json

Export project information to a JSON file.

--version

show program’s version number and exit

-g, --gui

Open test case(s) in the simulator GUI with the top level preloaded

Default: False

ghdl#

GHDL-specific flags

--gtkwave-fmt

Possible choices: vcd, fst, ghw

Save .vcd, .fst, or .ghw to open in gtkwave

--gtkwave-args

Arguments to pass to gtkwave

Default: “”

Incisive irun#

Incisive irun-specific flags

--cdslib

The cds.lib file to use. If not given, VUnit maintains its own cds.lib file.

--hdlvar

The hdl.var file to use. If not given, VUnit does not use a hdl.var file.
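
These arguments can be combined; for example, the following hypothetical invocation cleans the output path, then compiles and runs all matching tests on four threads while writing an XML report (using only the flags documented above):

Combined usage example#
> python run.py --clean -p 4 -x report.xml "lib.tb_example*"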

Example Session#

The VHDL User Guide Example can be run to produce the following output:

List all tests#
> python run.py -l
lib.tb_example.all
lib.tb_example_many.test_pass
lib.tb_example_many.test_fail
Listed 3 tests
Run all tests#
> python run.py -v lib.tb_example*
Running test: lib.tb_example.all
Running test: lib.tb_example_many.test_pass
Running test: lib.tb_example_many.test_fail
Running 3 tests

Starting lib.tb_example.all
Hello world!
pass (P=1 S=0 F=0 T=3) lib.tb_example.all (0.1 seconds)

Starting lib.tb_example_many.test_pass
This will pass
pass (P=2 S=0 F=0 T=3) lib.tb_example_many.test_pass (0.1 seconds)

Starting lib.tb_example_many.test_fail
Error: It fails
fail (P=2 S=0 F=1 T=3) lib.tb_example_many.test_fail (0.1 seconds)

==== Summary =========================================
pass lib.tb_example.all            (0.1 seconds)
pass lib.tb_example_many.test_pass (0.1 seconds)
fail lib.tb_example_many.test_fail (0.1 seconds)
======================================================
pass 2 of 3
fail 1 of 3
======================================================
Total time was 0.3 seconds
Elapsed time was 0.3 seconds
======================================================
Some failed!
Run a specific test#
> python run.py -v lib.tb_example.all
Running test: lib.tb_example.all
Running 1 tests

Starting lib.tb_example.all
Hello world!
pass (P=1 S=0 F=0 T=1) lib.tb_example.all (0.9 seconds)

==== Summary ==========================
pass lib.tb_example.all (0.9 seconds)
=======================================
pass 1 of 1
=======================================
Total time was 0.9 seconds
Elapsed time was 1.2 seconds
=======================================
All passed!

Opening a Test Case in Simulator GUI#

Sometimes the textual error messages and logs are not enough to pinpoint the error, and a test case needs to be opened in the GUI for visual debugging using single stepping, breakpoints, and waveform viewing. VUnit makes it easy to open a test case in the GUI via the -g/--gui command line flag:

> python run.py --gui my_test_case &

This launches a simulator GUI window with the top level for the selected test case loaded and ready to run. Depending on the simulator, a help text is printed where a few TCL functions are pre-defined:

# vunit_help
#   - Prints this help
# vunit_load [vsim_extra_args]
#   - Load design with correct generics for the test
#   - Optional first argument is passed as extra flags to vsim
# vunit_user_init
#   - Re-runs the user defined init file
# vunit_run
#   - Run test, must do vunit_load first
# vunit_compile
#   - Recompiles the source files
# vunit_restart
#   - Recompiles the source files
#   - and re-runs the simulation if the compile was successful

The test bench has already been loaded with the vunit_load command. Breakpoints can now be set and signals added to the log or to the waveform viewer manually by the user. The test case is then run using the vunit_run command. Recompilation can be performed without closing the GUI by running vunit_compile. It is also possible to run run.py with the --compile flag in a separate terminal.
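
A typical debug cycle in the simulator's Tcl console, using only the pre-defined functions above, might then be (a sketch; the test bench is already loaded when the GUI opens):

Typical GUI debug cycle#
# set breakpoints and add signals to the waveform viewer, then:
vunit_run
# after editing the source files, recompile and re-run:
vunit_restart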

Test Output Paths#

VUnit creates a separate output directory for each test to provide isolation. The test output paths are located under OUTPUT_PATH/test_output/. Test names are stripped of any characters unsuitable for file system paths, and a hash is added as a suffix to ensure uniqueness.

On Windows, paths may be shortened to avoid path length limitations. This behavior can be controlled with the VUNIT_SHORT_TEST_OUTPUT_PATHS and VUNIT_TEST_OUTPUT_PATH_MARGIN environment variables described below.

To get the exact test name to test output path mapping, the file OUTPUT_PATH/test_output/test_name_to_path_mapping.txt can be used. Each line contains a test output path followed by a space separator and then the test name.
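
As a sketch, the mapping file can be parsed with a few lines of Python (assuming the default ./vunit_out output path; the split on the first space follows the line format described above):

Parse the test name to path mapping#
from pathlib import Path

mapping_file = Path("vunit_out") / "test_output" / "test_name_to_path_mapping.txt"

test_name_to_path = {}
for line in mapping_file.read_text().splitlines():
    # Each line is "<output path> <test name>"; the path itself contains no spaces
    path, name = line.split(" ", 1)
    test_name_to_path[name] = path

print(test_name_to_path)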

Note

When using the run_all_in_same_sim pragma, all tests within the test bench share the same output folder, named after the test bench.

Environment Variables#

Simulator Selection#

VUnit automatically detects which simulators are available on the PATH environment variable and by default selects the first one found. For users with multiple simulators installed, the VUNIT_SIMULATOR environment variable can be set to one of activehdl, rivierapro, ghdl or modelsim to specify which simulator to use. modelsim is used for both ModelSim and Questa, as VUnit handles these simulators identically.
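
For example, to force GHDL for a single run even when other simulators are found first on the PATH (POSIX shell syntax):

Explicitly select the GHDL simulator#
VUNIT_SIMULATOR=ghdl python run.py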

In addition to VUnit scanning the PATH, the simulator executable path can be explicitly configured by setting a VUNIT_<SIMULATOR_NAME>_PATH environment variable.

Explicitly set path to GHDL executables#
VUNIT_GHDL_PATH=/opt/ghdl/bin

Simulator Specific#

  • VUNIT_MODELSIM_INI By default, VUnit copies the modelsim.ini file from the tool install folder as a starting point. Setting this environment variable selects another modelsim.ini file as the starting point, allowing the user to customize it.

Test Output Path Length#

  • VUNIT_SHORT_TEST_OUTPUT_PATHS Unfortunately, file system paths are still practically limited to 260 characters on Windows. VUnit tries to limit the length of the test output paths on Windows to avoid this limitation, while still including as much of the test name as possible and leaving a margin of 100 characters. VUnit cannot, however, foresee user-specific test output file lengths, so this environment variable can be set to minimize output path lengths on Windows. On other operating systems this limitation is not relevant.

  • VUNIT_TEST_OUTPUT_PATH_MARGIN Can be used to change the test output path margin on Windows. By default, the test output path is shortened to allow a 100-character margin.

Language revision selection#

The VHDL revision can be specified through the Python Interface (see vunit.ui.VUnit). Alternatively, the environment variable VUNIT_VHDL_STANDARD can be set to 93|1993, 02|2002, 08|2008 (default) or 19|2019.
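
For example (POSIX shell syntax):

Select VHDL-2008 via the environment#
VUNIT_VHDL_STANDARD=2008 python run.py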

Important

Note that VHDL revision 2019 is unsupported by most vendors, and support of VHDL 2008 features is uneven. Check the documentation of the simulator before using features requiring revisions equal to or newer than 2008.

JSON Export#

VUnit supports exporting project information through the --export-json command line argument. A JSON file is written containing the list of all files added to the project as well as a list of all tests. Each test has a mapping to its source code location.

The feature can be used for IDE integration, where the IDE can learn the path to all files, the library mapping of each file, and the source code location of all tests.

The JSON export file has three top level values:

  • export_format_version: The semantic version of the format

  • files: List of project files. Each file item has file_name and library_name.

  • tests: List of tests. Each test has attributes, location and name information. attributes is the mapping of test attributes. location contains the file name as well as the offset and length in characters of the symbol that defines the test. name is the name of the test.

Example JSON export file (file names are always absolute but the example has been simplified)#
{
    "export_format_version": {
        "major": 1,
        "minor": 0,
        "patch": 0
    },
    "files": [
        {
            "library_name": "lib",
            "file_name": "tb_example_many.vhd"
        },
        {
            "library_name": "lib",
            "file_name": "tb_example.vhd"
        }
    ],
    "tests": [
        {
            "attributes": {},
            "location": {
                "file_name": "tb_example_many.vhd",
                "length": 9,
                "offset": 556
            },
            "name": "lib.tb_example_many.test_pass"
        },
        {
            "attributes": {},
            "location": {
                "file_name": "tb_example_many.vhd",
                "length": 9,
                "offset": 624
            },
            "name": "lib.tb_example_many.test_fail"
        },
        {
            "attributes": {
                ".attr": null
            },
            "location": {
                "file_name": "tb_example.vhd",
                "length": 18,
                "offset": 465
            },
            "name": "lib.tb_example.all"
        }
    ]
}
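
As an illustration of how a tool might consume the export, the following Python sketch prints each test with its source location (it assumes the export was created with --export-json project.json):

Read the JSON export#
import json

with open("project.json") as fptr:
    project = json.load(fptr)

for test in project["tests"]:
    loc = test["location"]
    print(f'{test["name"]}: {loc["file_name"]} '
          f'(offset {loc["offset"]}, length {loc["length"]})')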

Note

Several tests may map to the same source code location if the user created multiple configurations of the same basic tests.