Feedback

Status
This old topic is closed. If you want to reopen this topic, contact a moderator using the "Report Post" button.
I can't comment on Waly's post; that's beyond my horizon.

But regarding the admittance/impedance matrices:
One can interpret them graph-theoretically with loop matrices and cut matrices. And they are not independent.
For the details, I would have to reread it myself.

W. K. Chen: Applied Graph Theory
Vlach: Computational Methods for Circuit Analysis
Pen-Min Lin: Symbolic Network Analysis

By the way, there exist graph-theoretic interpretations of determinants.

Then in nodal analysis, I = G*V, the I are independent current sources and the V are node voltages.
In mesh analysis, V = Z*I, the I are loop currents and the V are independent voltage sources.
So the V and I are not the same in both methods.
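This distinction can be checked numerically. Here is a minimal sketch of the same one-loop circuit solved both ways; the component values are my own, purely for illustration:

```python
# Same one-loop circuit solved both ways: Vs -- R1 -- node A -- R2 -- ground.
Vs, R1, R2 = 10.0, 1000.0, 2000.0

# Mesh analysis V = Z*I: the unknown is the loop current,
# the known is the independent voltage source Vs.
Z = R1 + R2                  # 1x1 loop-impedance "matrix"
I_loop = Vs / Z

# Nodal analysis I = G*V: the unknown is the node voltage at A,
# the known is the Norton-equivalent current source Vs/R1.
G = 1.0 / R1 + 1.0 / R2      # 1x1 node-conductance "matrix"
V_A = (Vs / R1) / G

# Both describe the same physics: the current through R2 from nodal
# analysis equals the loop current from mesh analysis.
print(I_loop, V_A / R2)
```

The point of the sketch is that the "I" and "V" vectors play opposite roles (knowns vs. unknowns) in the two formulations, even though the answers agree.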

I am thinking about the difference between the GFT and Tian's method,
and whether it is possible to 'realize' the GFT in a DPI/SFG way.
Or has somebody already done this?

Josef
 
...But regarding the admittance/impedance matrices:
One can interpret them graph-theoretically with loop matrices and cut matrices. And they are not independent.

Correct, they are not simple inverses, but neither are they independent.
That was the point of my earlier post.
...But they both provide the correct answer, so there must be some deeper structure that I don't quite understand.


For the details, I would have to reread it myself...
Thanks for the additional references. I will add them to your earlier W. K. Chen reference, of which I did find a copy.

By the way, there exist graph-theoretic interpretations of determinants.
What are they?

Then in nodal analysis, I = G*V, the I are independent current sources and the V are....
In mesh analysis, V = Z*I, the I are loop currents and the V are...
So the V and I are not the same in both methods.

That is an excellent explanation.

I am thinking about the difference between the GFT and Tian's method,
and whether it is possible to 'realize' the GFT in a DPI/SFG way.
Or has somebody already done this?

I have also thought about this, but have not done it myself, nor have I been able to find anyone who has.
I have already searched this forum, but most of the posts about the GFT seem to be my own, or replies to them.
So I need to search more widely.

Best wishes
David
 
By the way, there exist graph-theoretic interpretations of determinants.
What are they?
E.g. from W. K. Chen, Applied Graph Theory:

Theorem 2.28: The determinant of the node-admittance matrix Yn of an RLC network is given by det(Yn) = SUM(tree admittance products).

A simple example: an R-network described by its admittances.

Code:
    1-----y1------2
    | \           |
    |  \          |
    y4   --y5--   y2
    |          \  |
    |           \ |
    0-----y3------3
The trees in the graph are:

y1.y2.y3, y1.y2.y4, y1.y3.y4, y1.y3.y5, y1.y4.y5, y2.y3.y4, y2.y3.y5, y2.y4.y5

The node-admittance matrix Yn of the network (with node 0 as reference) is:

{{ y1+y4+y5,  -y1,       -y5      },
 { -y1,       y1+y2,     -y2      },
 { -y5,       -y2,       y2+y3+y5 }}

The determinant is:

Det(Yn) = y1.y2.y3 + y1.y2.y4 + y1.y3.y4 + y2.y3.y4 + y1.y3.y5 + y2.y3.y5 + y1.y4.y5 + y2.y4.y5

The terms in the determinant match the trees.
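Josef's example can be verified numerically. The following sketch (plain stdlib Python, with arbitrary test values for y1...y5 of my choosing) enumerates the spanning trees of the graph above with a small union-find and compares the sum of tree admittance products against the determinant of Yn:

```python
from itertools import combinations

# Arbitrary numeric test values for the five admittances.
y = {1: 2.0, 2: 3.0, 3: 5.0, 4: 7.0, 5: 11.0}
# Edges of the example graph: edge id -> (node, node), nodes 0..3.
edges = {1: (1, 2), 2: (2, 3), 3: (0, 3), 4: (0, 1), 5: (1, 3)}

def is_spanning_tree(edge_ids):
    """Three edges span the four nodes iff they create no cycle."""
    parent = list(range(4))
    def find(a):
        while parent[a] != a:
            a = parent[a]
        return a
    for e in edge_ids:
        ra, rb = find(edges[e][0]), find(edges[e][1])
        if ra == rb:
            return False        # adding this edge would close a cycle
        parent[ra] = rb
    return True

# Theorem 2.28: sum of tree admittance products over all spanning trees.
tree_sum = sum(y[a] * y[b] * y[c]
               for a, b, c in combinations(y, 3)
               if is_spanning_tree((a, b, c)))

# Node-admittance matrix Yn for nodes 1, 2, 3 (node 0 is the reference).
Yn = [[y[1] + y[4] + y[5], -y[1],             -y[5]],
      [-y[1],              y[1] + y[2],       -y[2]],
      [-y[5],              -y[2],             y[2] + y[3] + y[5]]]

# 3x3 determinant by cofactor expansion along the first row.
det = (Yn[0][0] * (Yn[1][1] * Yn[2][2] - Yn[1][2] * Yn[2][1])
     - Yn[0][1] * (Yn[1][0] * Yn[2][2] - Yn[1][2] * Yn[2][0])
     + Yn[0][2] * (Yn[1][0] * Yn[2][1] - Yn[1][1] * Yn[2][0]))

print(tree_sum, det)  # the two values agree
```

With these test values both quantities come out equal, matching the eight trees listed above (10 possible edge triples, minus the two that contain a cycle).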

There are such tree enumerations for both directed and undirected graphs,
and a lot more besides.

Best wishes
Josef
 
Mr. Zan,

- Are you able to write the linear differential equation that, together with the boundary conditions (essentially defined by the independent sources), completely and uniquely characterizes the time evolution of an arbitrary electrical circuit?

- Do you understand the concept of phase space and the phase-space method for analyzing the solution of a dynamic system (that is, solving the time-dependent differential equation)? A particular solution is a time-parametrized trajectory in the phase space; can you visualize in the phase space all possible solutions (each depending on a set of boundary conditions) of a certain linear differential equation?

- Do you understand the isomorphism between the space of linear operators L(U, V), with dim(U) = m and dim(V) = n, and the space of m x n matrices?

- For the particular case of a linear differential operator with boundary conditions, can you calculate the coefficients and write the equivalent n x n matrix? Are you surprised by the result?

If the answer to any of the above is "no", then you need a crash course in vector-space theory and its connection to linear algebra. I wish these things were more intuitive; unfortunately they are not. I can't help further on a public forum; I have already gone above and beyond.
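One concrete instance of the operator/matrix isomorphism mentioned above (my own toy example, not from the post): the differential operator d/dx on polynomials of degree < 4, once the basis {1, x, x^2, x^3} is chosen, becomes an ordinary 4 x 4 matrix acting on coefficient vectors.

```python
# d/dx on polynomials of degree < 4, in the basis {1, x, x^2, x^3}.
# Column j of D is the image of the j-th basis vector: d/dx x^j = j*x^(j-1).
n = 4
D = [[0.0] * n for _ in range(n)]
for j in range(1, n):
    D[j - 1][j] = float(j)

def apply(M, v):
    """Matrix-vector product: apply the operator in coordinates."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

# p(x) = 1 + 2x + 3x^2 + 4x^3  ->  p'(x) = 2 + 6x + 12x^2
p = [1.0, 2.0, 3.0, 4.0]
print(apply(D, p))  # [2.0, 6.0, 12.0, 0.0]
```

The same pattern extends to any linear differential operator on a finite-dimensional function space: pick a basis, record the images of the basis vectors as columns, and the operator is the matrix.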
 
What, beyond satisfying intellectual curiosity, can we expect?

I don't think many people can "visualize" beyond 3 dimensions, although some can apparently do a bit of mental manipulation in 4-D.

When I occasionally search for "geometric" in control theory, I think I see two fundamentally different approaches:

Differential Geometry of the phase space: 2016 | Nonlinear Control: Geometric Approach - TOKYO TECH OCW

and the "Algebraic Geometry" of matrix polynomials: Methods of Algebraic Geometry in Control Theory: Part I - Scalar | Peter Falb | Springer


I haven't been motivated to step up to those; my impression is that the differential-geometry / phase-space approach is mainly useful for strongly nonlinear systems.

But in audio it's pretty much only the PA where we really need strongly nonlinear operation of the gain devices, and then we finesse it with bias and complementary stages.

Handling PA clipping, slew limiting, and SOA protection may well benefit from more control-theory sophistication than I currently have or see locally.

But I suspect there really isn't much practical improvement in simple, discrete analog amplifier circuits likely to come from stepping up.

Possibly one could do more going digital: an MHz DAC driving the output transistors' bases via ADSL buffers would give scope for sophisticated nonlinear algorithms.
 
Ironically, Jan, they are the same. :)

The original question was about the interpretation of the determinant of an admittance matrix.

The usual geometric interpretation of a determinant is as the volume, in the phase space, occupied by all time-parametrized trajectories of the equivalent (isomorphic) linear differential operator. I would think (just my gut feeling) this volume could be interpreted as a metric of "circuit complexity".

Cramer's rule just picks one of these trajectories, namely the one enforced by the boundary conditions.
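For the mechanics, here is a minimal sketch of Cramer's rule on a hypothetical 2 x 2 conductance system; the numbers are illustrative only and not tied to any circuit in this thread:

```python
# Cramer's rule: x_k = det(A with column k replaced by b) / det(A).
def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def cramer2(A, b):
    d = det2(A)
    A0 = [[b[0], A[0][1]], [b[1], A[1][1]]]   # column 0 replaced by b
    A1 = [[A[0][0], b[0]], [A[1][0], b[1]]]   # column 1 replaced by b
    return [det2(A0) / d, det2(A1) / d]

# Example: a 2-node conductance system G*V = I (arbitrary values).
G = [[3.0, -1.0],
     [-1.0, 2.0]]
I = [5.0, 0.0]
V = cramer2(G, I)
print(V)  # [2.0, 1.0]
```

Each solution component is a ratio of two determinants, which is exactly the form in which the tree-enumeration results above enter symbolic network analysis.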

Simply nitpicking, as usual.
 
what...can we expect?

Also for Jan and Andrew.

I wanted to understand the internal feedback loops in a power amplifier, to optimize performance.
For instance to decide if local feedback was better applied around the VAS, as in a classic "Blameless", or around the OPS and VAS, as Cherry claims, or VAS and IPS, as Bob Cordell has used, or some combination or alternative.
I finally have some nice analysis results for this, subject to the condition that there is no "null loop" path.
I would like to include the null loop effects because that would advance the state of the art.
Tian and Ochoa do not consider the null loop; Bode and Middlebrook do.
But much of Bode is formulated in terms of matrix determinants that I find difficult to understand intuitively, so I made a query about the interpretation of determinants.
So what I hopefully expect is better-optimized power amplifiers; to me that means indisputably inaudible distortion with reduced complexity.

Best wishes
David
 