
I'm reading some old microcontroller code for a PID algorithm. There's a function that seems strange to me; it appears to work like this:

$output = \frac{1}{n}input + \frac{n-1}{n}previous\_output$

The value $n$ varies.

Over several iterations, it will eventually output a value very close to the input, but sudden jumps in the input create a lag that takes a few cycles to catch up.
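
To check my understanding, here is a minimal sketch of that behaviour in C (this assumes floating point; the original firmware may well use fixed-point arithmetic):

```c
#include <stdio.h>

/* The function as I understand it:
   output = input/n + (n-1)/n * previous_output */
static float smooth(float input, float previous_output, int n)
{
    return input / (float)n + (float)(n - 1) / (float)n * previous_output;
}

int main(void)
{
    float y = 0.0f;
    /* Step input of 10: the output lags, then converges toward the input */
    for (int i = 0; i < 10; i++) {
        y = smooth(10.0f, y, 4);
        printf("cycle %d: %f\n", i, y);
    }
    return 0;
}
```

With n = 4, the output reaches about 90% of a step input after roughly 8 cycles, which matches the lag I'm describing.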

This function is used in multiple places, for instance:

  • it's used on the setpoint when calculating the error: the current process variable is subtracted from f(setpoint) (where f is the function described above) instead of from the true setpoint. This seems to produce a weighted setpoint, where the weight is inversely proportional to the rate of change - is that correct?
  • it's also used in another component of the PID when comparing the current error to the previous one: the previous error is put through this function instead of being used directly. In this case, its purpose is harder to understand. (A sketch of both usages follows this list.)
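
To make the two usages concrete, here is a hypothetical PID step showing where I see the function applied. Every name and gain here (filtered_sp, smoothed_prev_err, Kp/Ki/Kd, n = 8) is a placeholder of mine, not taken from the actual firmware:

```c
#include <stdio.h>

/* Same smoothing function as above: output = input/n + (n-1)/n * previous */
static float smooth(float input, float previous, int n)
{
    return input / (float)n + (float)(n - 1) / (float)n * previous;
}

/* Hypothetical PID step illustrating the two usages from the list above */
static float pid_step(float setpoint, float pv, float dt)
{
    static float filtered_sp = 0.0f;       /* f(setpoint), first bullet */
    static float smoothed_prev_err = 0.0f; /* smoothed previous error, second bullet */
    static float integral = 0.0f;

    const float Kp = 1.0f, Ki = 0.1f, Kd = 0.05f; /* placeholder gains */

    /* First bullet: PV is subtracted from f(setpoint), not the true setpoint */
    filtered_sp = smooth(setpoint, filtered_sp, 8);
    float error = filtered_sp - pv;

    integral += error * dt;

    /* Second bullet: the true error is compared against a smoothed previous error */
    float derivative = (error - smoothed_prev_err) / dt;
    smoothed_prev_err = smooth(error, smoothed_prev_err, 8);

    return Kp * error + Ki * integral + Kd * derivative;
}

int main(void)
{
    float pv = 0.0f;
    for (int i = 0; i < 5; i++) {
        float u = pid_step(10.0f, pv, 0.01f);
        pv += u * 0.01f; /* trivial stand-in for the plant */
        printf("cycle %d: output %f, pv %f\n", i, u, pv);
    }
    return 0;
}
```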

Is this a common function used in PID controllers? I haven't found a description that quite seems to explain it, but it might be a case of simply not knowing the right keyword to search for.

John B
