
Writing Testbenches

A testbench is a simulated environment created to test and verify a hardware design before it is implemented in real hardware. It acts like a virtual laboratory, where we can apply various input conditions to a circuit, observe its outputs, and check whether it behaves as expected.

​

The circuit we want to test is called the DUT (Design Under Test). It can be anything, such as a counter, an ALU, a register, or a communication interface like UART or FIFO.

​

The main purpose of a testbench is to verify that the RTL design works correctly under all conditions. During simulation, the testbench provides input signals like clock, reset, and data to the DUT. It then observes the DUT’s outputs, compares them with the expected results, and finally logs or displays the outcome on the simulator waveform.

​

A basic testbench typically consists of a few main parts. The Stimulus Generator produces inputs such as clock, reset, and data signals for the DUT. The DUT itself is the hardware module we are testing. A Monitor or Checker observes the DUT’s outputs and compares them with the expected behavior to detect errors. Finally, a Display or Logger records the simulation results, which we can view as waveforms or printed messages.

​

The testbench works in a simple flow. First, the clock and reset signals are generated to initialize the design. Then, input stimuli are applied to the DUT. The DUT processes these inputs according to its logic and produces outputs. The monitor checks these outputs against the expected results. If there is any difference, the testbench reports an error. Once all tests are complete, the simulation ends, and the final results are shown on the waveform viewer or simulation log.

​

In short, a testbench is the most essential part of verification. It helps ensure that a hardware design performs its intended function correctly before moving to synthesis or implementation.


Methods for Verification with Testbenches:
1- Simple Method (Manual Approach):

In this example, we are testing a 4-bit up/down counter manually. A manual testbench is the simplest way to test a design because we directly control all the input signals inside one process and observe the output in the waveform window.

​

The counter has inputs for clock, reset, enable, load, and up_down control. The output is a 4-bit value that either increases or decreases depending on the up_down signal. In the testbench, we generate a clock that toggles every 5 nanoseconds, giving a total period of 10 nanoseconds. Inside the process, we apply different test conditions step by step.
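For reference, a clock like this can be generated with either of the two common idioms below (a small sketch; the signal name clk is assumed):

clk <= not clk after 5 ns;    -- concurrent version: works because clk is initialised to '0'

clk_gen : process              -- equivalent process version
begin
    clk <= '0'; wait for 5 ns;
    clk <= '1'; wait for 5 ns;
end process;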

​

First, we apply a reset by setting rst to '0'. Since the reset is active low, this clears the counter to 0000. After a short delay, we release the reset by setting rst back to '1'. Next, we load the value 0101 (which is 5 in decimal) into the counter by setting the load signal high for one clock cycle.

​

After loading, we enable the counter and set the up_down signal to '1' to count upwards. With each rising edge of the clock, the counter increases by one. You can see the sequence changing like 0101, 0110, 0111, 1000, and so on. After a few cycles, we change up_down to '0' so that the counter begins counting down. The values now decrease with each clock cycle, for example, 1000, 0111, 0110, and so on.

​

Finally, we disable the counter by setting enable to '0'. Once disabled, the counter stops updating and holds its last value until the end of the simulation.

​

In this manual testbench, everything is written explicitly. The signals are driven manually without using any procedure or automation. It is very easy to understand and ideal for small designs or when you are learning how to verify a circuit for the first time. However, it is not reusable for larger systems because every change must be written by hand.

​

In the waveform, you will see the counter first reset to zero, then load the value five, count up for some time, then count down, and finally stop when the enable signal is turned off. This demonstrates the complete functionality of the counter in a simple and clear way.

​

DUT:

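The counter source is embedded in the original post; since only its behaviour is described above, the listing below is a minimal sketch of such a DUT. The entity name counter, the port names (clk, rst, en, load, up_down, din, count) and the asynchronous active-low reset are assumptions; adjust them to match your own file.

-- sketch only: entity and port names are assumptions, adapt to your own counter.vhd
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity counter is
    port (
        clk     : in  std_logic;                     -- system clock
        rst     : in  std_logic;                     -- active-low reset (assumed asynchronous)
        en      : in  std_logic;                     -- '1' = counting enabled
        load    : in  std_logic;                     -- '1' = load din on the next rising edge
        up_down : in  std_logic;                     -- '1' = count up, '0' = count down
        din     : in  std_logic_vector(3 downto 0);  -- parallel load value
        count   : out std_logic_vector(3 downto 0)   -- current counter value
    );
end entity counter;

architecture rtl of counter is
    signal cnt : unsigned(3 downto 0) := (others => '0');
begin
    process (clk, rst)
    begin
        if rst = '0' then                    -- active-low reset clears the counter
            cnt <= (others => '0');
        elsif rising_edge(clk) then
            if load = '1' then               -- parallel load has priority over counting
                cnt <= unsigned(din);
            elsif en = '1' then
                if up_down = '1' then
                    cnt <= cnt + 1;          -- count up
                else
                    cnt <= cnt - 1;          -- count down
                end if;
            end if;
        end if;
    end process;

    count <= std_logic_vector(cnt);
end architecture rtl;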

Test Bench:

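The embedded testbench listing is likewise not reproduced here; the following sketch follows the scenario described above (clock toggling every 5 ns, active-low reset, load "0101", count up, count down, then disable) and reuses the assumed names from the DUT sketch.

-- sketch only: signal and port names are assumptions
library ieee;
use ieee.std_logic_1164.all;

entity tb_counter is
end entity tb_counter;

architecture sim of tb_counter is
    signal clk     : std_logic := '0';
    signal rst     : std_logic := '0';
    signal en      : std_logic := '0';
    signal load    : std_logic := '0';
    signal up_down : std_logic := '1';
    signal din     : std_logic_vector(3 downto 0) := (others => '0');
    signal count   : std_logic_vector(3 downto 0);
begin
    -- device under test
    dut : entity work.counter
        port map (clk => clk, rst => rst, en => en, load => load,
                  up_down => up_down, din => din, count => count);

    -- clock: toggles every 5 ns, so the period is 10 ns
    clk <= not clk after 5 ns;

    -- stimulus: every signal is driven explicitly, step by step
    stim : process
    begin
        rst <= '0';                 -- assert the active-low reset
        wait for 20 ns;
        rst <= '1';                 -- release the reset

        din  <= "0101";             -- load the value 5
        load <= '1';
        wait until rising_edge(clk);
        load <= '0';

        en      <= '1';             -- count up for a while
        up_down <= '1';
        wait for 50 ns;

        up_down <= '0';             -- now count down
        wait for 50 ns;

        en <= '0';                  -- disable: the counter holds its last value
        wait for 30 ns;

        report "End of manual test" severity note;
        wait;                       -- stop driving; run the simulator for ~200 ns and view the waveform
    end process;
end architecture sim;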

2- Bus Functional Model (BFM) Method:

In a BFM-based testbench, we don’t manually toggle every signal like in the manual testbench. Instead, we create small reusable procedures (or functions) that perform complete operations automatically. Each procedure represents a meaningful “transaction” — such as loading a value, counting up, or counting down. This makes the code cleaner, easier to read, and more realistic for complex verification environments.

​

Imagine that instead of pressing every button on the counter manually, you now have a remote control with buttons labeled RESET, LOAD, COUNT UP, and COUNT DOWN. Each button internally handles all the required signals in the right order. That’s exactly how a BFM works.

​

In our counter’s case, we can write a procedure for resetting, one for loading a value, one for counting up, and one for counting down. The testbench will only call these procedures, and each of them will internally drive the signals and timing needed to perform the operation.

​

Let’s say we create a procedure called apply_reset that sets rst low for a short time and then releases it. Another procedure called load_value can take an input value, assert the LOAD signal for one clock cycle, and then release it. Similarly, we can have count_up and count_down procedures that enable counting in the required direction for a few clock cycles.

The BFM-style testbench will then look very neat. Instead of long repetitive signal assignments, we will only have simple calls like this:

​

apply_reset;

load_value("0101");

count_up(5);

count_down(5);

 

Here, each line of code performs a complete sequence of signal activities internally. The actual signal-level work—like toggling load, enable, or up_down—is hidden inside the procedure, just like how a driver hides the complexity of signal timing from the main test scenario.
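As a rough sketch (assuming the signal names used in the manual testbench above), the first two of these procedures could be declared inside the stimulus process like this:

stim : process
    -- hold the active-low reset for two clock periods, then release it
    procedure apply_reset is
    begin
        rst <= '0';
        wait for 20 ns;
        rst <= '1';
        wait until rising_edge(clk);
    end procedure;

    -- drive din and pulse load for exactly one clock cycle
    procedure load_value(value : in std_logic_vector(3 downto 0)) is
    begin
        din  <= value;
        load <= '1';
        wait until rising_edge(clk);
        load <= '0';
    end procedure;
begin
    apply_reset;
    load_value("0101");
    -- count_up / count_down would follow the same pattern
    wait;
end process;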

​

This approach is especially useful in larger designs where multiple operations are repeated many times. With procedures, you write the logic once and use it again whenever you need it. It not only saves time but also reduces human errors, improves readability, and makes the testbench scalable for future extensions.

​

When the simulation runs, the BFM testbench performs the same behavior as the manual one: it resets the counter, loads the value five, counts up, counts down, and stops. But now, the code looks cleaner and is much easier to modify. For example, if you wanted to load a different value or change the number of cycles, you just change one parameter in the procedure call instead of rewriting multiple signal assignments.

​

So, in simple terms, a BFM testbench is a smarter version of the manual testbench. It focuses on what operation you want to perform, not how to perform it at the signal level. It’s like giving high-level commands while the procedures handle all the detailed low-level signal timing behind the scenes.

​

This method is the foundation of modern verification approaches like UVM, where transactions are used instead of raw signal driving. In short, the BFM makes your testbench cleaner, reusable, and closer to how real verification happens in the industry.

tb_counter_bfm.vhd

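The original tb_counter_bfm.vhd is embedded in the post; the version below is a simplified sketch of the same idea, using the procedure names do_reset, do_load and do_count described next, the assumed port names from earlier, and a monitor process that prints the count on every rising edge (to_string and std.env.stop are VHDL-2008 features).

-- sketch only: names and exact sequencing are assumptions
library ieee;
use ieee.std_logic_1164.all;

entity tb_counter_bfm is
end entity tb_counter_bfm;

architecture bfm of tb_counter_bfm is
    signal clk     : std_logic := '0';
    signal rst     : std_logic := '0';
    signal en      : std_logic := '0';
    signal load    : std_logic := '0';
    signal up_down : std_logic := '1';
    signal din     : std_logic_vector(3 downto 0) := (others => '0');
    signal count   : std_logic_vector(3 downto 0);
begin
    dut : entity work.counter
        port map (clk => clk, rst => rst, en => en, load => load,
                  up_down => up_down, din => din, count => count);

    clk <= not clk after 5 ns;          -- 10 ns clock period

    -- the whole test scenario is written with high-level procedure calls only
    main : process
        -- clock-synchronised reset "transaction"
        procedure do_reset is
        begin
            rst <= '0';
            wait until rising_edge(clk);
            wait until rising_edge(clk);
            rst <= '1';
            wait until rising_edge(clk);
        end procedure;

        -- load a value for exactly one clock cycle
        procedure do_load(value : in std_logic_vector(3 downto 0)) is
        begin
            din  <= value;
            load <= '1';
            wait until rising_edge(clk);
            load <= '0';
        end procedure;

        -- count up ('1') or down ('0') for a given number of cycles
        procedure do_count(direction : in std_logic; cycles : in natural) is
        begin
            up_down <= direction;
            en      <= '1';
            for i in 1 to cycles loop
                wait until rising_edge(clk);
            end loop;
            en <= '0';
        end procedure;
    begin
        do_reset;
        do_load("0101");                -- load 5
        do_count('1', 5);               -- count up for 5 cycles
        do_count('0', 5);               -- count down for 5 cycles
        report "BFM test finished" severity note;
        std.env.stop;                   -- VHDL-2008: end the simulation
    end process;

    -- monitor: print the counter value at every rising edge
    monitor : process
    begin
        wait until rising_edge(clk);
        report "count = " & to_string(count) severity note;  -- to_string needs VHDL-2008
    end process;
end architecture bfm;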

This is a BFM-style testbench. In this testbench, you don’t have to write low-level waits and signal assignments again and again. Instead, you use simple high-level procedure calls like do_reset, do_load, and do_count to create your entire test scenario.
 

Each procedure in this testbench is clock-synchronized, which means it automatically waits for clock edges inside the procedure itself. You don’t have to worry about timing details — you just call the procedure, and it handles everything internally.
 

This code is written in VHDL-2008, so when compiling, make sure to use the vcom -2008 flag, and compile your counter.vhd file before compiling the testbench.
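For example, in Questa (assuming the testbench entity is named tb_counter_bfm):

vlib work
vcom -2008 counter.vhd
vcom -2008 tb_counter_bfm.vhd
vsim work.tb_counter_bfm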
 

The monitor process in this testbench prints the counter value at every rising edge of the clock. You can remove it if you don’t need console output or change its severity level if you want different message types.
 

You can also extend these procedures to make your testbench more powerful — for example, you can add file-based inputs, random stimulus generation, or automatic result checking. This is exactly how professional verification environments are built in the industry using the BFM approach — clean, reusable, and easy to scale.

3- UVM Approach:

​

This is called a mixed-language verification setup, where your DUT (Design Under Test) is written in VHDL, but the testbench is written in SystemVerilog using UVM for advanced verification features. The two languages work together inside the simulator.

​

To make this possible, your simulator must support both languages, such as Mentor Questa, Cadence Xcelium, or Synopsys VCS-MX. The process is simple: first you compile the VHDL files (the DUT), and then you compile the SystemVerilog files (the wrapper, interface, and UVM testbench).

​

The easiest and cleanest way to connect a VHDL design with a SystemVerilog testbench is by creating a SystemVerilog wrapper. This wrapper simply instantiates the VHDL entity and exposes its ports so that SystemVerilog can connect to them through an interface.

​

VHDL DUT – The 4-bit Up/Down Counter

​

This is the same VHDL counter design you already have: counter.vhd from the manual-testbench example above.


SystemVerilog Wrapper for the VHDL Counter

​

This wrapper lets the SystemVerilog UVM testbench talk to the VHDL counter.


This module doesn’t add new logic — it only connects the SystemVerilog world to the VHDL entity.

SystemVerilog Interface

​​

The interface connects your UVM components (like driver, monitor) to the DUT.


SystemVerilog Top Module

​​

The top module brings everything together — it instantiates the wrapper, connects the interface, generates the clock, and starts the UVM environment.


How It Works

​

When the simulation starts, the top-level module generates a clock and connects the interface to the wrapper. The wrapper connects SystemVerilog signals to the VHDL DUT internally. The UVM testbench uses the interface to drive signals through the driver, and the driver uses the clocking block to make sure all signals change on the correct clock edge.

The simulation flow looks like this (here we assume the tool is Questa Sim):

  • VHDL counter compiles first (vcom counter.vhd)
     

  • SystemVerilog wrapper, interface, and UVM code compile next (vlog -sv ...)
     

  • The top testbench (tb_top.sv) runs, which automatically starts the UVM environment.
     

UVM then creates transactions and drives them to the DUT through the interface. The counter responds, and the monitor or scoreboard can check if the counter’s behavior matches the expected results.

This mixed-language setup is used in real verification environments because most industrial designs include both VHDL and Verilog modules. The wrapper method provides clean isolation between them, and the UVM testbench stays entirely in SystemVerilog.

In simple words, you can think of it like this — the VHDL counter is the actual hardware, the wrapper is the adapter that connects two different worlds, the interface is the wire connection, and the UVM environment is the brain that tests and checks whether the counter behaves correctly.

Complete UVM Environment for Counter Example:

​

As we know, the UVM testbench architecture is widely used in the industry today. The question often arises — can we use UVM with VHDL designs? The answer is yes. This approach is called Mixed-Language Simulation.

In mixed-language simulation, the design (DUT) is written in VHDL, while the testbench is written in SystemVerilog using UVM. This method is especially useful when the design has a large number of signals or complex interfaces that are easier to handle using UVM’s structured and reusable environment.

In the industry, this approach is becoming increasingly common because UVM was originally created for SystemVerilog, not VHDL. However, many companies still use VHDL for their RTL designs. To verify those designs efficiently without rewriting them in Verilog, engineers use mixed-language simulation.

All that’s needed is a simulator that supports both VHDL and SystemVerilog. Tools like Mentor Questa, Cadence Xcelium, or Synopsys VCS can compile and simulate both languages together in a single environment. This allows engineers to take advantage of UVM’s advanced verification features while keeping their existing VHDL designs unchanged.

In simple terms, mixed-language simulation provides the best of both worlds — the reliability and clarity of VHDL for design, combined with the power and flexibility of UVM for verification.


Below is a complete, runnable UVM verification environment (minimal but functional) for your VHDL counter DUT. It uses the wrapper + SV interface + UVM agent/driver/monitor/scoreboard/sequence/test approach and includes a counter_script.sh script to compile & run in Mentor Questa.

Design choices (kept simple and student-friendly)

​

  • DUT stays VHDL (your counter.vhd) — compile with vcom -2008.
     

  • A tiny SystemVerilog wrapper (counter_wrap.sv) connects to the VHDL entity.
     

  • counter_if.sv provides a clocking block for safe sync.
     

  • UVM agent consists of a sequencer, a driver, and a monitor.
     

  • The driver publishes an expected value (reference model) to a scoreboard via an analysis port.
     

  • The monitor samples the DUT output and publishes the observed value to the scoreboard.
     

  • The scoreboard compares expected vs observed and reports mismatches.
     

  • Everything is deterministic so the scoreboard can check results automatically.
     

Files — create each in the working directory with the exact filenames shown.

1) counter.vhd


2) counter_wrap.sv (SystemVerilog wrapper)


3) counter_if.sv (SV interface with clocking block)


4) tb_top.sv (top-level to instantiate wrapper, interface, and start UVM)


5) pkg_counter_types.sv (shared typedefs and item)


6) seq_item.sv (UVM sequence item)


7) sequence.sv (UVM sequence that sends deterministic transactions)


8) driver.sv (UVM driver that drives interface and publishes expected values)


Note: Driver here publishes one expected transaction when load happens and one when enable starts. To keep the example simple and deterministic, the sequence also controls timing (repeat waits), and the monitor will publish observed values each clock; scoreboard pairs expected events to observed ones by seq_id. This minimal arrangement is enough to demonstrate TLM paths and automated checking.

9) monitor.sv (samples DUT output each rising edge and publishes observed)


10) scoreboard.sv (receives expected and observed and compares)


Note: The scoreboard code above is intentionally minimal to show the TLM flow. In real testbenches you would correlate seq_id & timestamps, keep FIFOs, and handle many-to-many mapping. Here we demonstrate publishing expected (from driver) and observed (from monitor) and a basic check on load.

11) env.sv (creates agent, monitor, driver, scoreboard and connects analysis ports)


12) test.sv (UVM test that starts the sequence)


13) counter_script.sh (script to compile & run in Questa)

Make it executable: chmod +x counter_script.sh


How to run
 

  1. Put all files (above) in one folder.
     

  2. Make counter_script.sh executable: chmod +x counter_script.sh.
     

  3. Ensure Questa tools (vcom, vlog, vsim) are in your path.
     

  4. Run ./counter_script.sh.
     

  5. Check the console/logs for UVM reports and scoreboard messages.

     

 

Note:
 

  • This is a minimal but complete UVM pack to get you started. The scoreboard here is intentionally simple to keep clarity. In production you will:
     

    • Use transactions with seq_id and timestamps to correlate expected vs observed precisely.
       

    • Use FIFO buffers in scoreboard, robust matching, and many checks.
       

    • Add monitors, coverage, and assertion-based checks.
       

  • If your Questa setup requires explicit UVM library compile flags or different invocation, adjust the vlog/vsim flags per your tool version (some setups need -uvm or to compile an included uvm_pkg).
     

  • If anything errors during compile, the first places to check are package/includes and the virtual interface config path strings — they must match names used in tb_top and env.

4- Self-checking (automatic assertions).

This testbench drives the DUT and uses assert to check expected results automatically.

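The embedded listing is not reproduced here, but the idea can be sketched as a few extra checks inside the stimulus process of the manual testbench above (same assumed signal names; to_string needs VHDL-2008, or simply drop the value from the message for VHDL-93):

-- after loading "0101", the counter output should be 5
din  <= "0101";
load <= '1';
wait until rising_edge(clk);
load <= '0';
wait for 1 ns;                      -- let the output settle after the clock edge
assert count = "0101"
    report "Load failed: expected 0101, got " & to_string(count)
    severity error;

-- enable up-counting for one clock and check the increment
en      <= '1';
up_down <= '1';
wait until rising_edge(clk);
wait for 1 ns;
assert count = "0110"
    report "Count-up failed: expected 0110, got " & to_string(count)
    severity error;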

5- File-based driving (use TextIO to read stimulus/expected from a text file).

Format each line as: <load> <enable> <updown> <din> <expected> e.g. 1 0 1 5 5

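The embedded listing is not reproduced here; a sketch of such a file-reading process (same assumed signal names; use std.textio.all and ieee.numeric_std.all are required) could look like this:

-- sketch only: drives the counter from stimulus.txt and checks each expected value
file_driver : process
    file stim_file    : text open read_mode is "stimulus.txt";
    variable l        : line;
    variable v_load   : integer;
    variable v_en     : integer;
    variable v_updown : integer;
    variable v_din    : integer;
    variable v_exp    : integer;

    -- helper: convert a 0/1 integer from the file to std_logic
    function to_sl(i : integer) return std_logic is
    begin
        if i = 1 then return '1'; else return '0'; end if;
    end function;
begin
    rst <= '0';                     -- simple reset before applying the file
    wait for 20 ns;
    rst <= '1';

    while not endfile(stim_file) loop
        readline(stim_file, l);
        read(l, v_load);            -- <load>
        read(l, v_en);              -- <enable>
        read(l, v_updown);          -- <updown>
        read(l, v_din);             -- <din> (decimal)
        read(l, v_exp);             -- <expected> (decimal)

        load    <= to_sl(v_load);
        en      <= to_sl(v_en);
        up_down <= to_sl(v_updown);
        din     <= std_logic_vector(to_unsigned(v_din, 4));

        wait until rising_edge(clk);
        wait for 1 ns;              -- let the output settle

        assert to_integer(unsigned(count)) = v_exp
            report "Mismatch: expected " & integer'image(v_exp) &
                   ", got " & integer'image(to_integer(unsigned(count)))
            severity error;
    end loop;

    report "End of stimulus file" severity note;
    wait;
end process;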

Note: Create stimulus.txt in the same directory with lines like:
 

1 0 1 5 5

0 1 1 0 6

0 1 1 0 7

 

(Each line: load en updown din expected)

6- Random / constrained-random stimulus (simple LFSR-based pseudo-random generator).

This shows stress/random testing without third-party random packages.

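As a sketch of the technique (the LFSR width, polynomial, seed, and the mapping of LFSR bits to control signals below are illustrative choices, not the original code):

-- 8-bit maximal-length LFSR (x^8 + x^6 + x^5 + x^4 + 1) used as a
-- simple pseudo-random source; no external random packages needed
random_stim : process
    variable lfsr : std_logic_vector(7 downto 0) := x"A5";  -- any non-zero seed
    variable fb   : std_logic;
begin
    rst <= '0';
    wait for 20 ns;
    rst <= '1';

    for i in 1 to 100 loop                     -- 100 pseudo-random cycles
        fb   := lfsr(7) xor lfsr(5) xor lfsr(4) xor lfsr(3);
        lfsr := lfsr(6 downto 0) & fb;         -- shift in the feedback bit

        din     <= lfsr(3 downto 0);           -- pseudo-random load value
        load    <= lfsr(4);                    -- pseudo-random control bits
        en      <= lfsr(5);
        up_down <= lfsr(6);

        wait until rising_edge(clk);
    end loop;

    report "Random test finished" severity note;
    wait;
end process;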

This simple LFSR gives pseudo-random stimulus. For constrained-random you would add conditional checks before applying a random value (e.g., only allow load when some condition holds).

7- Transaction-based / layered testbench (generator, monitor, checker separated).

This example uses a protected type as a small thread-safe queue (VHDL-2008) to pass transactions from generator to checker. The generator creates transactions and pushes their expected results into a protected queue; the monitor samples DUT outputs and pushes observed results; the checker pulls both and compares.

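The full layered listing is embedded in the post and not reproduced here, but its central building block, the protected-type queue shared between the processes, can be sketched roughly like this (VHDL-2008; all names below are illustrative):

-- package holding a tiny FIFO implemented as a protected type
package tb_queue_pkg is
    type trans_queue is protected
        procedure push(value : in integer);
        impure function pop return integer;
        impure function size return natural;
    end protected trans_queue;
end package tb_queue_pkg;

package body tb_queue_pkg is
    type trans_queue is protected body
        type int_array is array (0 to 63) of integer;
        variable buf            : int_array;
        variable wr_ptr, rd_ptr : natural := 0;

        procedure push(value : in integer) is
        begin
            buf(wr_ptr mod 64) := value;
            wr_ptr := wr_ptr + 1;
        end procedure;

        impure function pop return integer is
            variable v : integer;
        begin
            v := buf(rd_ptr mod 64);
            rd_ptr := rd_ptr + 1;
            return v;
        end function;

        impure function size return natural is
        begin
            return wr_ptr - rd_ptr;
        end function;
    end protected body trans_queue;
end package body tb_queue_pkg;

In the testbench itself, two shared variables of this type (for example shared variable exp_q, obs_q : trans_queue;) act as the expected and observed queues: the generator pushes expected results, the monitor pushes sampled DUT outputs, and the checker pops from both and compares them.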

This last example is slightly more elaborate but shows the layered style: generator (stimulus), monitor (observe), and checker (compare) are separate processes, communicating via protected queues. In real industrial testbenches you would use more robust FIFO logic, timestamps or sequence IDs for correlation, better error handling, and coverage.

This testbench will run, but only on simulators that fully support the VHDL-2008 features used here (protected types, protected bodies, shared variables, etc.). Good simulators for this are Mentor Questa (or newer ModelSim releases), Aldec Riviera-PRO, and GHDL (open source) — all with VHDL-2008 mode enabled. Commercial tools from Cadence and Synopsys may also support these features, but check your tool version.

​

To run the testbench you should compile and run in VHDL-2008 mode. Example commands for common simulators:

​

For Questa/ModelSim:

​

  • vlib work
     

  • vcom -2008 counter.vhd
     

  • vcom -2008 tb_transaction.vhd
     

  • vsim work.tb_transaction
    or for batch run:

     

  • vsim -c work.tb_transaction -do "run -all; quit"
     

For GHDL:

​

  • ghdl -a --std=08 counter.vhd tb_transaction.vhd
     

  • ghdl -e --std=08 tb_transaction
     

  • ghdl -r --std=08 tb_transaction --vcd=wave.vcd (then open VCD in a waveform viewer)
     

Important notes and gotchas to keep in mind so it works reliably:

​

  • compile with VHDL-2008 (-2008 or --std=08) — protected types and shared variables need that standard.
     

  • some simulators accept protected type definitions only in packages; if you get errors, move the trans_queue protected type and its body into a separate package and use it in the testbench.
     

  • shared variables used across processes must be declared shared variable at a scope visible to all processes (you already used shared variable exp_q : trans_queue_impl; which is fine for VHDL-2008).
     

  • timing and race issues: your monitor and generator use wait until rising_edge(clk) correctly; still, if you see mismatches due to delta-cycle scheduling, add a small delay (for example wait for 1 ns;) before sampling, or sample at stable points.
     

  • this testbench is simulation-only (not synthesizable) — protected types and queues are not synthesizable.
     

  • if your simulator gives “protected body not allowed here” or “shared variable not supported”, check the simulator’s VHDL-2008 support and consider moving queue implementation into a package.

​

 

In today’s industry, UVM (Universal Verification Methodology) is officially standardized only for SystemVerilog, not for VHDL.


That means there is no native or official UVM library written in VHDL that is supported by all major EDA tools.

​

However, engineers who design in VHDL still need advanced verification techniques similar to UVM.


So over time, a few unofficial or VHDL-based UVM-like frameworks have appeared.


They follow the same ideas as UVM (like components, drivers, monitors, scoreboards, and transactions) but are written completely in VHDL.

​

The most popular of these are:

​

  • UVVM (Universal VHDL Verification Methodology) — created by Bitvis/Nordic Semiconductor.
     

  • OSVVM (Open Source VHDL Verification Methodology) — created by Jim Lewis (SynthWorks).
     

  • VUnit — a Python-driven test framework for VHDL and mixed VHDL/SystemVerilog simulation.
     

These are not exactly “UVM in VHDL,” but they provide UVM-style structure and functionality using only VHDL — so you can write modular, reusable, self-checking testbenches without switching to SystemVerilog.

​

For example:

​

  • In UVVM, you have components like BFM (Bus Functional Models), sequencers, checkers, and scoreboards — all in VHDL.
     

  • OSVVM adds advanced features such as functional coverage, constrained randomization, and utilities like logs and scoreboards — again, all written purely in VHDL.
     

  • VUnit integrates with Python so you can automate and run many VHDL test cases easily, similar to UVM regression environments.

​

While SystemVerilog UVM remains the global standard for high-end ASIC verification,
many companies — especially those working heavily in VHDL-based FPGA or mixed-signal systems — use OSVVM, UVVM, or VUnit as their VHDL-side verification methodology.

​

They are not academic tools — they are professional, open-source, and actively maintained frameworks that are used in real projects by major semiconductor and defense companies worldwide.

​

Examples from Industry

​

  • OSVVM (Open Source VHDL Verification Methodology)
     

    • Created by Jim Lewis (SynthWorks), who is part of the IEEE VHDL standards committee.
       

    • Used by companies like NASA, ESA, Airbus, Thales, Lockheed Martin, BAE Systems, and many FPGA design teams.
       

    • Provides constrained random testing, functional coverage, scoreboarding, and logging — all in pure VHDL.
       

    • Fully compatible with Mentor Questa, Riviera-PRO, GHDL, and Xilinx Vivado Simulator (with VHDL-2008).
       

    • Considered the de facto standard for advanced VHDL-only verification.
       

  • UVVM (Universal VHDL Verification Methodology)
     

    • Developed by Bitvis (now part of Nordic Semiconductor).
       

    • Strongly focused on structured, modular, and reusable testbenches using Bus Functional Models (BFMs).
       

    • Widely used in automotive, aerospace, defense, and FPGA-based communication systems.
       

    • Comes with ready-made BFMs for standard buses like AXI, Avalon, SPI, UART, I2C, etc.
       

    • Very popular in Europe and defense companies, because it’s open-source and supports strong modular architecture.
       

  • VUnit
     

    • Developed by Aleris AB (Sweden), it integrates Python for automated testing and regression management.
       

    • Used in FPGA verification, especially in academic + research setups and startups.
       

    • Great for continuous integration (CI/CD) pipelines — runs easily with GitHub Actions, Jenkins, or GitLab CI.

 

VHDL-93 and VHDL-2008:

​

VHDL-93 is stable and simple but more verbose and less flexible (fixed widths, sensitivities must be correct).
 

VHDL-2008 adds modern features: generics for width, wait-based processes, protected types, unconstrained arrays, record ports, and shorthand port mapping — all of which make your code shorter, safer, and easier to reuse and verify.
 

For new designs and testbenches, prefer VHDL-2008 (most modern simulators support it).
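As one tiny illustration (a sketch, not taken from this post): the VHDL-2008 'all' keyword removes the need to maintain process sensitivity lists by hand, one of the small conveniences that make both RTL and testbench code less error-prone. Another example, already used in the BFM testbench sketch above, is to_string for report messages.

library ieee;
use ieee.std_logic_1164.all;

entity mux2 is
    port (a, b, sel : in std_logic; y : out std_logic);
end entity mux2;

architecture rtl of mux2 is
begin
    -- VHDL-93: every signal read inside must be listed: process (a, b, sel)
    -- VHDL-2008: 'all' keeps the sensitivity list correct automatically
    process (all)
    begin
        if sel = '1' then y <= a; else y <= b; end if;
    end process;
end architecture rtl;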

