Run Library User Guide

Introduction

The VHDL run library is a set of VHDL packages providing functionality for running a VUnit testbench. This functionality, also known as the VHDL test runner (VR), can be used standalone, but the highly recommended approach, and the main focus of this user guide, is to use VR together with the Python-based test runner (PR). More information on the PR can be found in the PR user guide and the PR API documentation.

Minimal VUnit Testbench

A (close to) minimal VUnit testbench is outlined below.

library vunit_lib;
context vunit_lib.vunit_context;

entity tb_minimal is
  generic (runner_cfg : string);
end entity;

architecture tb of tb_minimal is
begin
  test_runner : process
  begin
    test_runner_setup(runner, runner_cfg);
    check_equal(to_string(17), "17");
    test_runner_cleanup(runner);
  end process;
end architecture;

The code has the following important properties:

  • The VUnit functionality is located in the vunit_lib library and is included with the library and context statements in the first two lines.

  • The runner_cfg generic is used to control the testbench from PR. If the VR is used standalone, you will need a default value, runner_cfg_default, for the generic. Note that the generic must be named runner_cfg for the testbench to be recognized by PR (there is an exception which we’ll get to later).

  • Every VUnit testbench has a main controlling process. It’s labelled test_runner in this example but the name is not important. The process starts by setting up VUnit using the test_runner_setup procedure with runner_cfg as the second parameter. The first runner parameter is a globally defined signal used to synchronize activities internal to VUnit. The process ends with a call to the test_runner_cleanup procedure which will perform some housekeeping activities and then force the simulation to a stop.

  • Since the call to test_runner_cleanup ends the simulation, you must execute your test code before that point. You can either place the test code directly between test_runner_setup and test_runner_cleanup or you can use that region to trigger and wait for test activities performed externally (for example in a different process). Alternatively, you can combine the two strategies. In this example, the test code is within the same process and uses the check_equal procedure from the check library.
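A testbench like this is compiled and launched by a small Python run script (run.py). The guide doesn't list its run script, so the sketch below is only an illustration: the source file name is an assumption, while the library name lib matches the test names in the console output shown in this guide.

```python
# run.py -- minimal VUnit run script (sketch; the source file name is an assumption)
from vunit import VUnit

# Parse command-line arguments (test patterns, --list, -p, -v, ...)
vu = VUnit.from_argv()

# Compile the testbench into a library named "lib"
lib = vu.add_library("lib")
lib.add_source_files("tb_minimal.vhd")

# Scan for test cases, run the selected ones, and print the summary
vu.main()
```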

Running this testbench using PR will result in the following output (or similar):

> python run.py
Starting lib.tb_minimal.all
Output file: C:\github\vunit\docs\run\src\vunit_out\test_output\lib.tb_minimal.all_42aa262c7c96c708ab3f3960f033f2328c642136\output.txt
pass (P=1 S=0 F=0 T=1) lib.tb_minimal.all (0.5 seconds)

==== Summary ==============================
pass lib.tb_minimal.all (0.5 seconds)
===========================================
pass 1 of 1
===========================================
Total time was 0.5 seconds
Elapsed time was 0.5 seconds
===========================================
All passed!

Testbench with Test Cases

A VUnit testbench can be structured around a set of test cases called a test suite. Defining test cases is a way of emphasizing what functionality is being verified by the testbench.

test_runner : process
begin
  test_runner_setup(runner, runner_cfg);

  -- Put test suite setup code here. This code is common to the entire test suite
  -- and is executed *once* prior to all test cases.

  while test_suite loop

    -- Put test case setup code here. This code is executed before *every* test case.

    if run("Test to_string for integer") then
      -- The test case code is placed in the corresponding (els)if branch.
      check_equal(to_string(17), "17");

    elsif run("Test to_string for boolean") then
      check_equal(to_string(true), "true");

    end if;

    -- Put test case cleanup code here. This code is executed after *every* test case.

  end loop;

  -- Put test suite cleanup code here. This code is common to the entire test suite
  -- and is executed *once* after all test cases have been run.

  test_runner_cleanup(runner);
end process;

This testbench has two test cases, named Test to_string for integer and Test to_string for boolean. If a test case has been enabled by runner_cfg, the corresponding run function call will return true the first time it is called, and the test code in that (els)if branch is executed. All test code can be in that branch, as in the example, or the branch can be used to coordinate activities elsewhere in the testbench.

The test_suite function returns true, and keeps the loop running, until there are no more enabled test cases remaining to execute.

Note that registration of test cases is not necessary; PR will scan your testbenches for run function calls in order to find all test cases. Also note that the test case name must be a string literal or it won’t be found by PR.

In case VR is used standalone, the test cases are registered on-the-fly the first time the run function is called.

A VUnit testbench runs through several distinct phases during the course of a simulation. The first is the test_runner_setup phase, implemented by the procedure with the same name, and the last is the test_runner_cleanup phase. In between, there are a number of setup/cleanup phases for the test suite and the test cases. The code for these phases, if any, is defined by the user and is placed as indicated by the comments in the example. These phases are typically used for configuration, resetting the DUT, reading/writing test data from/to file, and so on. Phases can also be used to synchronize testbench processes and avoid issues such as premature simulation exits, where test_runner_cleanup ends the simulation before all processes have completed their tasks. For a more comprehensive description of phases, please refer to the VUnit phase blog.

Running this testbench gives the following output:

> python run.py
Starting lib.tb_with_test_cases.Test to_string for integer
Output file: C:\github\vunit\docs\run\src\vunit_out\test_output\lib.tb_with_test_cases.Test_to_string_for_integer_f5d39e15e865eddcda2b57f65dddc2428c994af4\output.txt
pass (P=1 S=0 F=0 T=2) lib.tb_with_test_cases.Test to_string for integer (0.6 seconds)

Starting lib.tb_with_test_cases.Test to_string for boolean
Output file: C:\github\vunit\docs\run\src\vunit_out\test_output\lib.tb_with_test_cases.Test_to_string_for_boolean_38c3f897030cff968430d763a9bbc23202de1a7b\output.txt
pass (P=2 S=0 F=0 T=2) lib.tb_with_test_cases.Test to_string for boolean (0.5 seconds)

==== Summary =============================================================
pass lib.tb_with_test_cases.Test to_string for integer (0.6 seconds)
pass lib.tb_with_test_cases.Test to_string for boolean (0.5 seconds)
==========================================================================
pass 2 of 2
==========================================================================
Total time was 1.1 seconds
Elapsed time was 1.1 seconds
==========================================================================
All passed!

Distributed Testbenches

Some testbenches with more distributed control may have several processes whose operations depend on the currently running test case. However, there can be only one call to the run("Name of test case") function; otherwise, VUnit will think that you have several test cases with the same name, which is not allowed within a testbench. To avoid this, you can use the running_test_case function, which returns the name of the running test case. Below is an example of how it can be used. The example also contains a few other VUnit features:

  • info is a procedure for logging information. For more details, please refer to the logging library user guide.

  • start_stimuli is a VUnit event used to synchronize the two processes. The test_runner process activates the event using notify and the stimuli_generator process waits until the event becomes active using is_active. For more details, please refer to the event user guide.

  • get_entry_key, lock, and unlock are subprograms used to prevent test_runner_cleanup from ending the simulation before the stimuli_generator process has completed. For more details, please refer to the VUnit phase blog.

architecture tb of tb_running_test_case is
  signal start_stimuli : event_t := new_event;
begin
  test_runner : process
  begin
    test_runner_setup(runner, runner_cfg);

    while test_suite loop
      if run("Test scenario A") or run("Test scenario B") then
        notify(start_stimuli);
      elsif run("Test something else") then
        info("Testing something else");
      end if;
    end loop;

    test_runner_cleanup(runner);
  end process;

  stimuli_generator: process is
    constant key : key_t := get_entry_key(test_runner_cleanup);
  begin
    wait until is_active(start_stimuli);
    lock(runner, key);

    if running_test_case = "Test scenario A" then
      info("Applying stimuli for scenario A");
    elsif running_test_case = "Test scenario B" then
      info("Applying stimuli for scenario B");
    end if;

    unlock(runner, key);
  end process;

end architecture;

The running_test_case function returns the name of the currently running test case, from the call to the run function of that test case until the next run function is called. Before the first call to run, or after a call to run returning false, this function will return an empty string ("").

There’s also a similar function, active_test_case, which returns a test case name within all parts of the test_suite loop. However, this function is not supported when running the testbench standalone without PR.

In the examples described so far, the main controlling process has been placed in the top-level entity. It's also possible to move it to a lower-level entity. To do that, the runner_cfg generic has to be passed down to that entity. However, the generic in that lower-level entity must not be called runner_cfg, since PR considers every VHDL file with a runner_cfg generic to be a top-level testbench to simulate. So the testbench top-level can look like this

library vunit_lib;
context vunit_lib.vunit_context;

entity tb_with_lower_level_control is
  generic (runner_cfg : string);
end entity;

architecture tb of tb_with_lower_level_control is
begin

  test_control: entity work.test_control
    generic map (
      nested_runner_cfg => runner_cfg);

end architecture;

And the lower-level entity like this

library vunit_lib;
context vunit_lib.vunit_context;

entity test_control is
  generic (
    nested_runner_cfg : string);
end entity test_control;

architecture tb of test_control is
begin
  test_runner : process
  begin
    test_runner_setup(runner, nested_runner_cfg);

    while test_suite loop
      if run("Test something") then
        info("Testing something");
      elsif run("Test something else") then
        info("Testing something else");
      end if;
    end loop;

    test_runner_cleanup(runner);
  end process;
end architecture tb;

The default PR behaviour is to scan all VHDL files with an entity containing a runner_cfg generic for test cases to run. Now that the lower-level entity uses another generic name, you have to use the scan_tests_from_file method in your run script.
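In the run script, that could look like the sketch below. scan_tests_from_file is documented in vunit.ui.testbench.TestBench, while the source file names here are assumptions:

```python
# run.py -- sketch; source file names are assumptions
from vunit import VUnit

vu = VUnit.from_argv()
lib = vu.add_library("lib")
lib.add_source_files("tb_with_lower_level_control.vhd")
lib.add_source_files("test_control.vhd")

# The run() calls live in the lower-level file, so tell PR to scan
# that file for test cases instead of the top-level testbench file
tb = lib.test_bench("tb_with_lower_level_control")
tb.scan_tests_from_file("test_control.vhd")

vu.main()
```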

Controlling What Test Cases to Run

When working with VUnit you will eventually end up with many testbenches and test cases. For example

> python run.py --list
lib.tb_magic_paths.all
lib.tb_minimal.all
lib.tb_running_test_case.Test scenario A
lib.tb_running_test_case.Test scenario B
lib.tb_running_test_case.Test something else
lib.tb_run_all_in_same_sim.Test to_string for integer again
lib.tb_run_all_in_same_sim.Test to_string for boolean again
lib.tb_standalone.Test that fails on VUnit check procedure
lib.tb_standalone.Test to_string for boolean
lib.tb_stopping_failure.Test that fails on an assert
lib.tb_stopping_failure.Test that crashes on boundary problems
lib.tb_stopping_failure.Test that fails on VUnit check procedure
lib.tb_stop_level.Test that fails multiple times but doesn't stop
lib.tb_with_lower_level_control.Test something
lib.tb_with_lower_level_control.Test something else
lib.tb_with_test_cases.Test to_string for integer
lib.tb_with_test_cases.Test to_string for boolean
lib.tb_with_watchdog.Test that stalls
lib.tb_with_watchdog.Test to_string for boolean
lib.tb_with_watchdog.Test that needs longer timeout
Listed 20 tests

You can control what testbenches and test cases to run from the command line by listing their names and/or using patterns. For example

> python run.py *minimal.all *integer
Starting lib.tb_minimal.all
Output file: C:\github\vunit\docs\run\src\vunit_out\test_output\lib.tb_minimal.all_42aa262c7c96c708ab3f3960f033f2328c642136\output.txt
pass (P=1 S=0 F=0 T=2) lib.tb_minimal.all (0.5 seconds)

Starting lib.tb_with_test_cases.Test to_string for integer
Output file: C:\github\vunit\docs\run\src\vunit_out\test_output\lib.tb_with_test_cases.Test_to_string_for_integer_f5d39e15e865eddcda2b57f65dddc2428c994af4\output.txt
pass (P=2 S=0 F=0 T=2) lib.tb_with_test_cases.Test to_string for integer (0.5 seconds)

==== Summary =============================================================
pass lib.tb_minimal.all                                (0.5 seconds)
pass lib.tb_with_test_cases.Test to_string for integer (0.5 seconds)
==========================================================================
pass 2 of 2
==========================================================================
Total time was 1.1 seconds
Elapsed time was 1.1 seconds
==========================================================================
All passed!

PR will simulate matching testbenches and use runner_cfg to control which test cases to run. Be aware that your shell may expand the pattern before VUnit sees it. For example, the pattern *tb* matches all of the test case names above, but it also matches all testbench file names in the current directory, resulting in the file names being passed to VUnit and no matching tests being found. Quoting the pattern, as in python run.py "*tb*", prevents the shell from expanding it. Alternatively, passing .*tb* will match all tests but no files.

Running Test Cases Independently

The test suite while loop presented earlier iterates over all enabled test cases, but the default behaviour of VUnit is to run all test cases in separate simulations, only enabling one test case at a time. There are several good reasons for this

  • The pass/fail status of a test case is based on its own merits and is not a side effect of other test cases. This makes it easier to trust the information in the test report.

  • A failing test case, causing the simulation to stop, won't prevent the other test cases in the testbench from running.

  • You can save time by just running one of many slow test cases if that’s sufficient for a specific test run.

  • You can run test cases in parallel threads using the multicore capabilities of your computer. Below all three tests are run in parallel using the -p option. Note the 3x difference between the total simulation time and the elapsed time.

> python run.py -p3 *minimal.all *else
Starting lib.tb_minimal.all
Output file: C:\github\vunit\docs\run\src\vunit_out\test_output\lib.tb_minimal.all_42aa262c7c96c708ab3f3960f033f2328c642136\output.txt
Starting lib.tb_with_lower_level_control.Test something else
Output file: C:\github\vunit\docs\run\src\vunit_out\test_output\lib.tb_with_lower_level_control.Test_something_else_e47dc199cab8c612d9a0f46b8be7d141576fc970\output.txt
Starting lib.tb_running_test_case.Test something else
Output file: C:\github\vunit\docs\run\src\vunit_out\test_output\lib.tb_running_test_case.Test_something_else_27dcc1aa8d44993b6b2d0b0a017fa6001b4c2aa7\output.txt
pass (P=1 S=0 F=0 T=3) lib.tb_running_test_case.Test something else (0.6 seconds)

pass (P=2 S=0 F=0 T=3) lib.tb_minimal.all (0.6 seconds)

pass (P=3 S=0 F=0 T=3) lib.tb_with_lower_level_control.Test something else (0.6 seconds)

==== Summary ===============================================================
pass lib.tb_running_test_case.Test something else        (0.6 seconds)
pass lib.tb_minimal.all                                  (0.6 seconds)
pass lib.tb_with_lower_level_control.Test something else (0.6 seconds)
============================================================================
pass 3 of 3
============================================================================
Total time was 1.7 seconds
Elapsed time was 0.6 seconds
============================================================================
All passed!

Possible drawbacks to this approach are that test cases have to be independent and that starting a new simulation for each test case adds overhead (typically less than one second per test case). If these drawbacks are a concern, you can force all test cases of a testbench to be run in the same simulation. This is done by adding the run_all_in_same_sim attribute (-- vunit: run_all_in_same_sim) before the test suite.

library vunit_lib;
context vunit_lib.vunit_context;

entity tb_run_all_in_same_sim is
  generic(runner_cfg : string);
end entity;

architecture tb of tb_run_all_in_same_sim is
begin
  test_runner : process
  begin
    test_runner_setup(runner, runner_cfg);

    -- vunit: run_all_in_same_sim
    while test_suite loop
      if run("Test to_string for integer again") then
        check_equal(to_string(17), "17");
      elsif run("Test to_string for boolean again") then
        check_equal(to_string(true), "true");
      end if;
    end loop;

    test_runner_cleanup(runner);
  end process;
end architecture;

The run_all_in_same_sim attribute can also be set from the run script, see vunit.ui.testbench.TestBench.

Important

When setting run_all_in_same_sim from the run script, the setting must be identical for all configurations of the testbench.
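As a sketch, setting the attribute from the run script might look like the following, assuming the set_attribute method of vunit.ui.testbench.TestBench accepts the run_all_in_same_sim attribute (verify the exact call against your VUnit version; file names are assumptions):

```python
# run.py -- sketch; assumes TestBench.set_attribute accepts
# "run_all_in_same_sim" (verify against your VUnit version)
from vunit import VUnit

vu = VUnit.from_argv()
lib = vu.add_library("lib")
lib.add_source_files("tb_run_all_in_same_sim.vhd")

tb = lib.test_bench("tb_run_all_in_same_sim")
# Equivalent to the "-- vunit: run_all_in_same_sim" comment in the VHDL code.
# The setting must be identical for all configurations of the testbench.
tb.set_attribute("run_all_in_same_sim", True)

vu.main()
```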

The VUnit Watchdog

Sometimes your design has a bug causing a test case to stall indefinitely, maybe preventing a nightly test run from proceeding. To avoid this, VUnit provides a watchdog which will time out and fail a test case after a specified time.

architecture tb of tb_with_watchdog is
begin
  test_runner : process
  begin
    test_runner_setup(runner, runner_cfg);

    while test_suite loop
      if run("Test that stalls") then
        wait;
      elsif run("Test to_string for boolean") then
        check_equal(to_string(true), "true");
      elsif run("Test that needs longer timeout") then
        -- It is also possible to set/reset the timeout
        -- when test cases need separate timeout settings.
        set_timeout(runner, 2 ms);
        wait for 1 ms;
      end if;
    end loop;

    test_runner_cleanup(runner);
  end process;

  test_runner_watchdog(runner, 10 ms);
end architecture;

Note that the problem with the first test case doesn't prevent the remaining test cases from running.

> python run.py
Starting lib.tb_with_watchdog.Test that stalls
Output file: C:\github\vunit\docs\run\src\vunit_out\test_output\lib.tb_with_watchdog.Test_that_stalls_7f50c5908f9e9f9df5075e065f984eef1c2f7b2b\output.txt
  10000000000000 fs - runner               -   ERROR - Test runner timeout after 10000000000000 fs.
C:\github\vunit\vunit\vhdl\core\src\core_pkg.vhd:84:7:@10ms:(report failure): Stop simulation on log level error
C:\ghdl\bin\ghdl.exe:error: report failed
in process .tb_with_watchdog(tb).P0
  from: vunit_lib.logger_pkg.decrease_stop_count at logger_pkg-body.vhd:736
  from: vunit_lib.logger_pkg.decrease_stop_count at logger_pkg-body.vhd:731
  from: vunit_lib.logger_pkg.log at logger_pkg-body.vhd:910
  from: vunit_lib.logger_pkg.error at logger_pkg-body.vhd:969
  from: vunit_lib.run_pkg.test_runner_watchdog at run.vhd:308
  from: process lib.tb_with_watchdog(tb).P0 at tb_with_watchdog.vhd:37
C:\ghdl\bin\ghdl.exe:error: simulation failed
fail (P=0 S=0 F=1 T=3) lib.tb_with_watchdog.Test that stalls (0.5 seconds)

Starting lib.tb_with_watchdog.Test to_string for boolean
Output file: C:\github\vunit\docs\run\src\vunit_out\test_output\lib.tb_with_watchdog.Test_to_string_for_boolean_f167e524924d51144c5a6913c63e9fa5c6c7988c\output.txt
pass (P=1 S=0 F=1 T=3) lib.tb_with_watchdog.Test to_string for boolean (0.5 seconds)

Starting lib.tb_with_watchdog.Test that needs longer timeout
Output file: C:\github\vunit\docs\run\src\vunit_out\test_output\lib.tb_with_watchdog.Test_that_needs_longer_timeout_5494104827c61d0022a75acbab4c0c6de9e29643\output.txt
pass (P=2 S=0 F=1 T=3) lib.tb_with_watchdog.Test that needs longer timeout (0.5 seconds)

==== Summary ===============================================================
pass lib.tb_with_watchdog.Test to_string for boolean     (0.5 seconds)
pass lib.tb_with_watchdog.Test that needs longer timeout (0.5 seconds)
fail lib.tb_with_watchdog.Test that stalls               (0.5 seconds)
============================================================================
pass 2 of 3
fail 1 of 3
============================================================================
Total time was 1.6 seconds
Elapsed time was 1.6 seconds
============================================================================
Some failed!

What Makes a Test Fail?

Stopping Failures

Anything that stops the simulation before the test_runner_cleanup procedure is called will cause a failing test.

test_runner : process
  variable my_vector : integer_vector(1 to 17);
begin
  test_runner_setup(runner, runner_cfg);

  while test_suite loop
    if run("Test that fails on an assert") then
      assert false;
    elsif run("Test that crashes on boundary problems") then
      report to_string(my_vector(runner_cfg'length));
    elsif run("Test that fails on VUnit check procedure") then
      check_equal(17, 18);
    elsif run("Test that a warning passes") then
      assert false severity warning;
    end if;
  end loop;

  test_runner_cleanup(runner);
end process;

All but the last of these test cases will fail

> python run.py
Starting lib.tb_stopping_failure.Test that fails on an assert
Output file: C:\repos\vunit\docs\run\src\vunit_out\test_output\lib.tb_stopping_failure.Test_that_fails_on_an_assert_f53b930e2c7649bc33253af52f8ea89a9c05f07b\output.txt
C:\repos\vunit\docs\run\src\tb_stopping_failure.vhd:24:9:@0ms:(assertion error): Assertion violation
ghdl:error: assertion failed
ghdl:error: simulation failed
fail (P=0 S=0 F=1 T=4) lib.tb_stopping_failure.Test that fails on an assert (0.5 s)

(11:35:31) Starting lib.tb_stopping_failure.Test that crashes on boundary problems
Output file: C:\repos\vunit\docs\run\src\vunit_out\test_output\lib.tb_stopping_failure.Test_that_crashes_on_boundary_problems_b53105615efefaa16d0cf9ee1bad37b5d3369e95\output.txt
ghdl:error: index (314) out of bounds (1 to 17) at C:\repos\vunit\docs\run\src\tb_stopping_failure.vhd:26
ghdl:error: simulation failed
fail (P=0 S=0 F=2 T=4) lib.tb_stopping_failure.Test that crashes on boundary problems (0.5 s)

(11:35:31) Starting lib.tb_stopping_failure.Test that fails on VUnit check procedure
Output file: C:\repos\vunit\docs\run\src\vunit_out\test_output\lib.tb_stopping_failure.Test_that_fails_on_VUnit_check_procedure_717a6f8ff044e3d5fa7d7d3ec5a32971d74864dd\output.txt
               0 fs - check                -   ERROR - Equality check failed - Got 17. Expected 18.
C:\repos\vunit\vunit\vhdl\core\src\core_pkg.vhd:85:7:@0ms:(report failure): Stop simulation on log level error
ghdl:error: report failed
ghdl:error: simulation failed
fail (P=0 S=0 F=3 T=4) lib.tb_stopping_failure.Test that fails on VUnit check procedure (0.5 s)

(11:35:32) Starting lib.tb_stopping_failure.Test that a warning passes
Output file: C:\repos\vunit\docs\run\src\vunit_out\test_output\lib.tb_stopping_failure.Test_that_a_warning_passes_7db91f3b27aea5f89e74e39ea51ce6d61558674e\output.txt
pass (P=1 S=0 F=3 T=4) lib.tb_stopping_failure.Test that a warning passes (0.4 s)

==== Summary ============================================================================
pass lib.tb_stopping_failure.Test that a warning passes               (0.4 s)
fail lib.tb_stopping_failure.Test that fails on an assert             (0.5 s)
fail lib.tb_stopping_failure.Test that crashes on boundary problems   (0.5 s)
fail lib.tb_stopping_failure.Test that fails on VUnit check procedure (0.5 s)
=========================================================================================
pass 1 of 4
fail 3 of 4
=========================================================================================
Total time was 1.8 s
Elapsed time was 1.8 s
=========================================================================================
Some failed!

By setting the VUnit fail_on_warning attribute (-- vunit: fail_on_warning) before the test suite, the last test case will also fail.

test_runner : process
  variable my_vector : integer_vector(1 to 17);
begin
  test_runner_setup(runner, runner_cfg);

  -- vunit: fail_on_warning
  while test_suite loop
    if run("Test that fails on an assert") then
      assert false;
    elsif run("Test that crashes on boundary problems") then
      report to_string(my_vector(runner_cfg'length));
    elsif run("Test that fails on VUnit check procedure") then
      check_equal(17, 18);
    elsif run("Test that a warning passes") then
      assert false severity warning;
    end if;
  end loop;

  test_runner_cleanup(runner);
end process;

The fail_on_warning attribute can also be set from the run script, see vunit.ui.testbench.TestBench.

Important

When setting fail_on_warning from the run script, the setting must be identical for all configurations of the testbench.

Counting Errors with VUnit Logging/Check Libraries

If you use the VUnit check/logging library you can set the stop_level such that the simulation continues on an error. Such errors will be remembered and the test will fail despite reaching the test_runner_cleanup call.

By default, test_runner_cleanup will fail if any error or failure was logged, even if the log level was disabled. Disabled errors or failures can be allowed using the allow_disabled_errors or allow_disabled_failures flags. Warnings can also optionally cause failure by setting the fail_on_warning flag.

test_runner : process
begin
  test_runner_setup(runner, runner_cfg);
  set_stop_level(failure);

  while test_suite loop
    if run("Test that fails multiple times but doesn't stop") then
      check_equal(17, 18);
      check_equal(17, 19);
    end if;
  end loop;

  test_runner_cleanup(runner);
end process;

> python run.py
Starting lib.tb_stop_level.Test that fails multiple times but doesn't stop
Output file: C:\github\vunit\docs\run\src\vunit_out\test_output\lib.tb_stop_level.Test_that_fails_multiple_times_but_doesn't_stop_d08f48d859442d0bc71e2bcdd8b429119f7cc17c\output.txt
               0 fs - check                -   ERROR - Equality check failed - Got 17. Expected 18.
               0 fs - check                -   ERROR - Equality check failed - Got 17. Expected 19.
FAILURE - Logger check has 2 errors
C:\github\vunit\vunit\vhdl\core\src\core_pkg.vhd:84:7:@0ms:(report failure): Final log check failed
C:\ghdl\bin\ghdl.exe:error: report failed
in process .tb_stop_level(tb).test_runner
  from: vunit_lib.logger_pkg.final_log_check at logger_pkg-body.vhd:1249
  from: vunit_lib.run_pkg.test_runner_cleanup at run.vhd:114
  from: process lib.tb_stop_level(tb).test_runner at tb_stop_level.vhd:29
C:\ghdl\bin\ghdl.exe:error: simulation failed
fail (P=0 S=0 F=1 T=1) lib.tb_stop_level.Test that fails multiple times but doesn't stop (0.5 seconds)

==== Summary =============================================================================
fail lib.tb_stop_level.Test that fails multiple times but doesn't stop (0.5 seconds)
==========================================================================================
pass 0 of 1
fail 1 of 1
==========================================================================================
Total time was 0.5 seconds
Elapsed time was 0.5 seconds
==========================================================================================
Some failed!

Running A VUnit Testbench Standalone

A VUnit testbench can be run just like any other VHDL testbench, without involving PR. This is not the recommended way of working, but it can be useful in an organization which has started to use, but not fully adopted, VUnit. If you simulate the testbench below without PR, the runner_cfg generic will have the value runner_cfg_default, which will cause all test cases to be run.

library vunit_lib;
context vunit_lib.vunit_context;

entity tb_standalone is
  generic (runner_cfg : string := runner_cfg_default);
end entity;

architecture tb of tb_standalone is
begin
  test_runner : process
  begin
    test_runner_setup(runner, runner_cfg);

    while test_suite loop
      if run("Test that fails on VUnit check procedure") then
        check_equal(17, 18);
      elsif run("Test to_string for boolean") then
        check_equal(to_string(true), "true");
      end if;
    end loop;

    info("===Summary===" & LF & to_string(get_checker_stat));

    test_runner_cleanup(runner);
  end process;
end architecture;

However, since PR hasn't scanned the code for test cases, VUnit doesn't know how many there are. Instead, it will iterate the while loop as long as there is a call to the run function with a test case name VUnit hasn't seen before. The first iteration in the example above will run the Test that fails on VUnit check procedure test case and the second iteration will run Test to_string for boolean. Then there is a third iteration in which no new test case is found, which triggers VUnit to end the while loop.

The default level for a VUnit check like check_equal is error, and the default behaviour when running with PR is to stop the simulation on error. When running standalone, the default behaviour is to stop the simulation on the failure level, so that the simulation can run through all test cases despite a failing check, as in the example above.

Without PR there is a need to print the test result. VUnit provides the get_checker_stat function to get the internal error counters and a to_string function to convert the returned record to a string. The example uses that and VUnit logging capabilities to create a simple summary in the test suite cleanup phase.

It’s also useful to print the currently running test case. VR has an internal logger, runner, providing such information. This information is suppressed when running with PR but is enabled in standalone mode:

#             0 ps - runner  -    INFO  - Test case: Test that fails on VUnit check procedure
#             0 ps - check   -    ERROR - Equality check failed - Got 17. Expected 18.
#             0 ps - runner  -    INFO  - Test case: Test to_string for boolean
#             0 ps - default -    INFO  - ===Summary===
#                                         checker_stat'(n_checks => 2, n_failed => 1, n_passed => 1)

Note that VUnit cannot handle VHDL asserts in this mode of operation. If the simulator has full support for VHDL-2019, it is possible to read out assert error counters and use check_equal to verify that these are zero before calling test_runner_cleanup. Failures like division by zero or out-of-range operations are other examples that won’t be handled gracefully in this mode, and not something that VHDL provides any solutions for.

Special Paths

When running with PR you can get the path to the directory containing the testbench and the path to the output directory of the current test by using the tb_path and output_path generics, which are described in more detail in the PR documentation. It’s also possible to access these path strings from the runner_cfg generic by using the tb_path and output_path functions.

Running the following testbench

library vunit_lib;
context vunit_lib.vunit_context;

entity tb_magic_paths is
  generic (runner_cfg : string);
end entity;

architecture tb of tb_magic_paths is
begin
  test_runner : process
  begin
    test_runner_setup(runner, runner_cfg);
    info("Directory containing testbench: " & tb_path(runner_cfg));
    info("Test output directory: " & output_path(runner_cfg));
    test_runner_cleanup(runner);
  end process;
end architecture;

will reveal that

> python run.py -v
Starting lib.tb_magic_paths.all
Output file: C:\github\vunit\docs\run\src\vunit_out\test_output\lib.tb_magic_paths.all_243b3c717ce1d4e82490245d1b7e8fe8797f5e94\output.txt
               0 fs - default              -    INFO - Directory containing testbench: C:/github/vunit/docs/run/src/
               0 fs - default              -    INFO - Test output directory: C:/github/vunit/docs/run/src/vunit_out/test_output/lib.tb_magic_paths.all_243b3c717ce1d4e82490245d1b7e8fe8797f5e94/
simulation stopped @0ms with status 0
pass (P=1 S=0 F=0 T=1) lib.tb_magic_paths.all (0.5 seconds)

==== Summary ==================================
pass lib.tb_magic_paths.all (0.5 seconds)
===============================================
pass 1 of 1
===============================================
Total time was 0.5 seconds
Elapsed time was 0.5 seconds
===============================================
All passed!

Note On Undocumented Features

VR contains a number of features not documented in this guide. These features are under evaluation and will be documented or removed when that evaluation has completed.