I have a problem understanding how always_ff works in terms of creating a mesh of logic gates.

What do I mean? When I use always_comb like here:

module gray_koder_dekoder(i_data, i_oper, o_code);
    parameter LEN = 4;
    input logic [LEN-1:0] i_data;
    input logic             i_oper;
    output logic [LEN-1:0]  o_code;
    
    int i;
    always_comb
    begin
        o_code = '0;
        i = LEN-1;
        if (i_oper == 1'b1) // 1'b1 - encode
        begin               //       operation
            o_code = i_data ^ (i_data >> 1);
        end
        else    // for any other value:
        begin   //     decode
            o_code = i_data;
            for (i=LEN-1; i>0; i=i-1)
            begin
                o_code[i-1] = o_code[i] 
                              ^  i_data[i-1];
            end
        end
    end
endmodule

So how do I see it: at the beginning the program sees that the output is 0000. Now, if i_oper is equal to 1, it changes o_code to i_data ^ (i_data >> 1), so the program wants to build a combination of logic gates for this operation; but if i_oper is equal to 0, then the program makes another set of logic gates to get a different o_code.

So always_comb gives the final result for every bit of i_data that ends up in o_code.
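
For example, this is how I picture the per-bit meaning of o_code = i_data ^ (i_data >> 1) for LEN = 4 (my own hand-unrolling, just to check my understanding; the module name gray_koder_unrolled is only for illustration):

// a 0 is shifted in at the top, so the top bit is copied as-is
module gray_koder_unrolled (input  logic [3:0] i_data,
                            output logic [3:0] o_code);
    always_comb begin
        o_code[3] = i_data[3];              // i_data[3] ^ 1'b0
        o_code[2] = i_data[2] ^ i_data[3];
        o_code[1] = i_data[1] ^ i_data[2];
        o_code[0] = i_data[0] ^ i_data[1];
    end
endmodule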

So my teacher said that always_comb is "blocking" but always_ff is not "blocking", and I don't get it... So always_ff doesn't give the final result of logic gates for the input to get a specific output?

Another example of always_comb:

module gray_dekoder (i_gray, o_data);

    parameter LEN = 4;
    input  logic [LEN-1:0] i_gray;
    output logic [LEN-1:0] o_data;

    always_comb
    begin
        
        o_data = i_gray;

        for (int i=LEN-1; i>0; i=i-1)
            o_data[i-1] = o_data[i] ^ i_gray[i-1];    
    end

endmodule

So at the beginning the program sees that the output is 0000, so it will make a set of logic gates that gives 0 at the end. Then it sees the "for" loop that modifies the output, so the program checks every bit of the input, like bit nr 3, then nr 2, then nr 1, etc., and creates a specific output for every input. So now the output is not 0000 anymore but a set of instructions that modifies the output, made from the "for" loop.

So always_comb gives a final result from analyzing the whole code, from top to bottom of the "always_comb", and creates a set of instructions / a set of logic gates that implements it. Because always_comb overwrites the previous instructions: 0000 was the basic instruction, but then it was overwritten by the "for" loop.
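
If I understand correctly, for LEN = 4 the loop in gray_dekoder unrolls to something like this (again my own hand-unrolling; the module name gray_dekoder_unrolled is only for illustration):

module gray_dekoder_unrolled (input  logic [3:0] i_gray,
                              output logic [3:0] o_data);
    always_comb begin
        o_data[3] = i_gray[3];              // from o_data = i_gray
        o_data[2] = o_data[3] ^ i_gray[2];  // loop iteration i = 3
        o_data[1] = o_data[2] ^ i_gray[1];  // loop iteration i = 2
        o_data[0] = o_data[1] ^ i_gray[0];  // loop iteration i = 1
    end
endmodule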

But maybe I think wrongly, because what if an instruction doesn't overwrite the 0000 instruction, like here:

module replace(i_a, i_b, o_replaced, o_error);
    parameter BITS = 4;
    input  logic signed [BITS-1:0] i_a, i_b;
    output logic signed [BITS-1:0] o_replaced;
    output logic o_error;

    int i;
    always_comb
    begin
        o_replaced = '0;
        if (i_b < 0 || i_b > BITS)
        begin
            o_error = 1;
            o_replaced = 'x;
        end
        else
        begin
            i = i_b;
            o_replaced = i_a;
            o_replaced[i-1] = 1;
            o_error = 0;
        end
    end
endmodule

Here I have the 0000 output that isn't overwritten for "else", so I don't know what happens then.

I think of always_comb as a "final result" that gives a set of instructions for how to create logic gates. But the final result is at the end, so if something changes then the beginning result doesn't matter, it is "overwritten"; but with the if statement this doesn't work with my mindset.

About always_ff, I heard that it doesn't give a final result, that it can stop at any point, not like always_comb where the program analyzes from top to bottom.

user331990
  • Does this answer your question: https://stackoverflow.com/questions/23101717/difference-among-always-ff-always-comb-always-latch-and-always ? – Mikef Nov 19 '22 at 23:53
  • @Mikef Not really. What does "non-blocking" mean? Also, blocking for always_comb means that it gives the final result for every input to output, like I've written above, but always_ff? It doesn't give a set of logic gate combinations for the output? Because that's what blocking means, I guess. Or what does it even mean? – user331990 Nov 20 '22 at 00:50
  • An understanding of the Verilog event scheduler (how Verilog simulation works), & blocking and non-blocking assignments (simulation uses them to model combinational & sequential logic) would help. Not to be dismissive, but there are a lot of explanations of both of these concepts on the web. Authors have taken the time to write and polish explanations (some short some very long). I won't be able to explain better than some have already. Recommend Search 'Verilog blocking non-blocking' and 'Verilog event scheduler', keep in mind the points made on the link I sent previously. – Mikef Nov 20 '22 at 01:36

1 Answer

Verilog is designed to represent the behavior of hardware, and it is not a regular programming language. It operates with different semantics.

At a top-level glance, hardware consists of combinational logic and flops (and latches). From another point of view, hardware is a set of parallel functions which are synchronized across the design by clocks. This means that at a clock edge a lot of hardware devices start working in parallel, and they should produce results by the next clock edge. Those results can be used by other functions in the next clock cycle.

Roughly, combinational logic defines a function, flops provide synchronization.

In Verilog all those devices are described with always blocks. Use of edges, e.g. @(posedge clk), provides synchronization points and usually defines flops in the code. A simple function and a flop look like the following.

// combinational logic
always @* // you can use always_comb instead
   val = in1 & in2;  // a combinational function

// flop
always @(posedge clk) // you can use always_ff instead
   out <= val;  // synchronization

So, in the example, val is calculated by the combinational function, and the flop makes its synchronized version out available to other functions. You can see the progression of clocks and results in a waveform.

So, this is what always_ff is doing, just providing synchronization and expressing flops for synthesis.

In general, always, always_comb, always_ff and always_latch are identical. The last three are SystemVerilog blocks and just provide additional hints to the compiler, which can run additional checks on them. I intentionally used plain always blocks in my example to show that. There are some other conditions which need to be met to cleanly express the intention. So, your assertion about always_ff working differently has no basis. It works the same as other always blocks.
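
As a minimal sketch of the "additional hints" idea (the module and signal names below are mine, for illustration only): both blocks describe the same 2-to-1 mux, but the always_comb version also tells the tool that the block is meant to be purely combinational, so it can warn about things like accidentally inferred latches.

module hint_example (input  logic sel, a, b,
                     output logic y1, y2);

    // plain Verilog style: perfectly legal, but no intent is declared
    always @*
        y1 = sel ? a : b;

    // SystemVerilog style: same logic, but the compiler knows this block
    // must be purely combinational and can run extra checks on it
    always_comb
        y2 = sel ? a : b;
endmodule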

What I think confuses you is the use of blocking (=) and non-blocking (<=) assignments. It does not matter for synthesis which one you use, but it matters for simulation. The difference is described in numerous documents and examples. To understand it properly you need to look into Verilog simulation scheduling semantics.

But the rule of thumb is that you should use non-blocking assignments (<=) in flops and blocking ones (=) in combinational logic. In flops, '<=' allows simulating the real behavior of flops. Remember that hardware is a massively parallel evaluation engine. Consider the following example:

always_ff @(posedge clk) begin
   out1 <= in1;
   out2 <= out1;
end

The above example defines at least two flops working at posedge clk. out1 and out2 must be synchronized at this clock. It means the flops have to catch the values which existed before the edge and present them after the edge. So, for out1 the value that existed before the edge is in1, evaluated by combinational logic. What would be the value of out2? Which value existed before the edge? Apparently, the value of out1 before it gets changed to the new value of in1.

clk ___|---|___|---|___
in1  0
out1 x   0  << new value of in1
out2     x  << old value of out1
               .
in1          1 .
out1           . 1 << new value of in1
out2           . 0 << old value of out1

So, after evaluation of the block, at the first edge the value of out2 will be 'x' (the previous value of out1); at the second clock edge it will finally get the value of 'in1' as it existed in the previous clock cycle.
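
For contrast, here is a rough sketch of the same two assignments written with blocking '=' instead (the module and signal names are mine; many tools will warn about blocking assignments inside always_ff). Now out1 updates immediately within the time step, so out2 picks up the new value at the same edge and the two-stage behavior disappears:

module shift_blocking (input  logic clk, in1,
                       output logic out1, out2);
    always_ff @(posedge clk) begin
        out1 = in1;    // out1 updates immediately in this time step...
        out2 = out1;   // ...so out2 gets the NEW out1 (i.e. in1), not the old one
    end
endmodule

In simulation out2 now follows in1 after one clock, just like out1; the extra one-cycle delay from the non-blocking version is gone.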

I hope this makes your understanding a bit better.

Serge
  • I do have 2 questions. What are blocking and non-blocking assignments? My basic problem is that always_comb gives back a list of instructions (it checks the situation for every bit of the output and the final result is the instruction), and that is what blocking means (that's what my teacher tried to tell me, or I didn't understand). Non-blocking means that the program doesn't have to check the whole always_ff, it can stop at some point, not like in always_comb. And that was weird for me, because the program makes the logic gate structure. How does it know every possibility? – user331990 Nov 20 '22 at 19:03
  • If always_ff is non-blocking, which means it doesn't check what happens with every bit separately and doesn't give the final result (a set of instructions for how to combine inputs, which logic gates to use to get a certain output for a certain bit), then it stops somewhere. So it doesn't analyze from top to bottom of the always_ff. – user331990 Nov 20 '22 at 19:05
  • Sorry for the many comments; if it's unclear I can make an edit to the main post? – user331990 Nov 20 '22 at 19:05
  • Judging by your questions, you lack basic knowledge of Verilog and how hardware works. An explanation of the basics at this level cannot be provided in this forum. I suggest you go back to reading about hardware design and Verilog and do some tutorials. There is a lot of information around about blocking/non-blocking assignments. Your wording about separate bits, results, and top-to-bottom analysis in always blocks does not make much sense. – Serge Nov 20 '22 at 19:44
  • That's what my teacher told me, at least. Look at my example nr. 1 from the main post. We have an output with 0 in every bit. And if i_oper = 1 then o_code = i_data^(i_data >>1), a set of instructions for every bit, so o_code[0] = i_data[0]^(i_data[0]>>1), o_code[1] = i_data[1]^(i_data[1]>>1), etc. For else it is the same, o_code[i-1] = o_code[i]^i_data[i-1], and like that for every bit. So the computer knows, from analyzing the code from top to bottom for every bit, which logic gates it needs to use. But for always_ff, non-blocking means that it doesn't have to execute from top to bottom. – user331990 Nov 20 '22 at 23:09
  • https://www.chipverify.com/verilog/verilog-blocking-non-blocking-statements . So I don't understand how the program knows which logic gates to use when some part of the code can be skipped in always_ff. Sorry for not understanding it, but I'm trying to read about it; I know how to use it but I don't understand how the program interprets it, because I can make a mistake using always_ff. – user331990 Nov 20 '22 at 23:10
  • always_ff skips **nothing**. Nor does any other block. *Non-blocking* assignments schedule results to appear in the next clock cycle, while blocking ones provide them immediately. But first you need to understand what a clock and a clock cycle are and how hardware reacts to clocks. – Serge Nov 21 '22 at 01:13