Discrete Opamp Open Design

Disabled Account
Joined 2012
The latest IBM supercomputer uses IBM cores. The latest Cray uses AMD CPUs and Nvidia GPUs. China's supercomputer uses Nvidia GPUs. -RNM

SW: ".... the destruction of our childrens minds" ?? Last I saw, the USA children today ranked way far down in mind power.... So, it isnt the educational system that destroyed thier minds.... its those mindless video games?

[just poking fun] Thx-RNMarsh
 
www.hifisonix.com
Joined 2003
Paid Member
I discussed this with a sales guy; there are definitely more and less reliable SSDs. I was thinking of getting one, but not as an only drive. I could see putting all my swap space on one; one can always rerun sims in case of a crash.



I remember back in the late 1970s or early 80s there was an advert in Byte for a 1 Mbyte RAM hard drive. It was the size of 2 or 3 bricks and had battery backup (IIRC). Times have changed.
 
Now that is good news, Scott! Thanks!

Fascinating that SPICE can split its workload during TRAN. I am running some TRAN sessions with time lengths of thousands of seconds; each takes many hours, and I have dozens to do. Now I use two computers and am planning for a third machine to speed things up.

Would be cool if it were possible to rent CPU time from some supercomputer somewhere on the planet. One that can run code from MicroCap SPICE :)


Best regards,
S.

Yes, last month we generated the eye diagram for a 10G TIA running a 2^31-bit pattern, looking for inter-symbol interference. One circuit, one LONG transient analysis. 3+ days on a fairly loaded workstation, a few hours in the farm.
 
In a sense I agree with Scott here. Our children are more and more being served finished activities instead of creating. There are a few kids who write some of their own software, but the majority are just being fed what other people have made. Oh, everyone can use an iPod and have the latest Android app, but where are the intelligence-increasing activities in that?

I have two teenagers, and if all the time they spend on FB or Twitter were used to, for example, write some software of their own design, that would give them more satisfaction in the long run. And if all teenagers did more creative things than just surfing the web and posting on FB, educational skills would increase instead of decrease.

When I was a teenager, heck, I had so much spare time, as there was no FB etc., that I sat down and learned to program computers instead. Nerdy? Heck, yes! Did it make me smarter? Heck, yes :rolleyes:

Maybe I am walking on thin ice here...


The latest IBM supercomputer uses IBM cores. The latest Cray uses AMD CPUs and Nvidia GPUs. China's supercomputer uses Nvidia GPUs. -RNM

SW: ".... the destruction of our childrens minds" ?? Last I saw, the USA children today ranked way far down in mind power.... So, it isnt the educational system that destroyed thier minds.... its those mindless video games?

[just poking fun] Thx-RNMarsh
 
Would be cool if it were possible to rent CPU time from some supercomputer somewhere on the planet. One that can run code from MicroCap SPICE :)


Best regards,
S.

I looked and could not find any references for this. I don't know of any semiconductor company that owns a supercomputer configured for circuit simulation. Intel would be the best bet, but I don't know.

The price and power consumption would be hard to justify. There is this: The cheap supercomputer | Kagaku, a $400K "home-made" computer with 400 or so GPUs. We demoed one made with 100 or so, offered as a potential product, that had HSPICE compiled on it. Nothing came of this.
 
Disabled Account
Joined 2012
There is no reason why a dual- or quad-core PC couldn't do sims a lot faster --- except that the architecture is all wrong for doing such.... the uPC wasn't designed for that purpose and now you are stuck with it. If Linux could become more mainstream - maybe Intel would develop a more useful micro-CPU architecture for engineers.

There was a time when the mini-CPU got started, but it died when HP bought them (DEC) out. The DEC LSI-11 was designed for just what we need now... it had the architecture for doing number crunching most efficiently and was the bridge between micro and mainframe. We still need that bridge product. -Thx RNMarsh
 
diyAudio Member RIP
Joined 2005
There is no reason why a dual- or quad-core PC couldn't do sims a lot faster --- except that the architecture is all wrong for doing such.... the uPC wasn't designed for that purpose and now you are stuck with it. If Linux could become more mainstream - maybe Intel would develop a more useful micro-CPU architecture for engineers.

There was a time when the mini-CPU got started, but it died when HP bought them (DEC) out. The DEC LSI-11 was designed for just what we need now... it had the architecture for doing number crunching most efficiently and was the bridge between micro and mainframe. We still need that bridge product. -Thx RNMarsh

I have fond memories (npi) of the LSI-11. It ran my spectrometer and allowed the astronomers to play with their data at the same time. Most of the memory was provided by two clunky 8" floppies; the semiconductor memory was all of 32k words.
 
If Linux could become more mainstream - maybe Intel would develop a more useful micro-CPU architecture for engineers.

Thx RNMarsh

Richard, you do say some things that I just don't understand sometimes. I consider Linux totally mainstream; we are not alone in having it as the preferred environment for all engineering purposes. Many major software vendors support it and not Windows. Linux is probably the best trickle-up of all.

Intel is interested in making money; the demand has been there for a long time, I guess just not enough of it.
 
Disabled Account
Joined 2012
I know I am cryptic at times (OK, a lot of the time). I mean if we could wean the average Joe citizen off Microsoft Windows and over to a Linux OS, we would stand a chance of doing better by allowing for a new CPU architecture... one which would be more like a mini-computer. Just how do you turn the big Windows/Intel uPC ship around? Or are there alternatives (mini-CPU) that are affordable, PNP, for personal use that I don't know about? What CPU do you use -- and software?

I used to have -in my home- an LSI-11 (RT11 OS), maxed out with the memory and drives of the time..... math coprocessor and the works... efficient code and less operating overhead made it do FFTs very fast. I used DADiSP software at home... still available for the PC too. Look it up. Superfast -- I could change part values on the schematic and see the Bode plot etc. change in real time in the other window. In fact I could have a dozen windows open on the display screen and see the effect in each window's plot of whatever change was made in one window. You don't need a tonne of memory to do this, just efficient code and a CPU designed for number crunching (e.g. 'scientific'). And it didn't use Windows or Linux... all machine code.
BTW - these number-crunching machines are misleading when you compare the memory size needed to get work done with today's x86-derivative machines. The many layers of code and the switching of data I/O in and out of memory are the best reason to change over to an SSD. You'll think you just got hooked up to a mini-CPU machine. Really. Thx-RNMarsh
 
There is no reason why a dual- or quad-core PC couldn't do sims a lot faster

Simulation of dynamic systems has extreme data dependencies; that is, a computation needs the previous results before it can be executed, hence it is essentially serial code that can hardly be parallelized.

Just try simulating any circuit with a multicore CPU: you'll see that most of the work is done on a single core, with a second core being marginally used.
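
A minimal sketch of that dependency (my own illustration, not how any particular SPICE engine is written): a forward-Euler transient solve of an RC discharge, where every time step consumes the result of the one before it, so the time loop itself cannot be handed out across cores.

[CODE]
/* Hypothetical illustration only: forward-Euler transient solve of an
 * RC discharge.  Step n+1 needs the voltage from step n, so the time
 * loop itself is inherently serial.                                  */
#include <stdio.h>

int main(void)
{
    const double R = 1e3, C = 1e-6;   /* 1 kohm, 1 uF               */
    const double dt = 1e-6;           /* 1 us time step             */
    double v = 5.0;                   /* initial capacitor voltage  */

    for (long n = 0; n < 10000; n++) {
        /* v depends on its own previous value; later steps cannot
         * start until this one has finished.                        */
        v += dt * (-v / (R * C));
    }
    printf("v after 10 ms: %g V\n", v);   /* roughly 5 * e^-10      */
    return 0;
}
[/CODE]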

I looked and could not find any references for this. I don't know of any semiconductor company that owns a supercomputer configured for circuit simulation. Intel would be the best bet, but I don't know.

Actually, for years now they have been using all their available PCs as a giant cloud to increase computation throughput...
 
Actually, for years now they have been using all their available PCs as a giant cloud to increase computation throughput...

I already said yesterday that we do that. I don't see Linux being served by a different CPU architecture; there simply has to be a compelling business argument. The only modern CPU that I've programmed in assembler was a 68040. Apple's preferred compiler vendor would not support any number-crunching optimizations; the 68040 with co-processor had eight 80-bit FP registers (great for pipelining FFTs), but the compiler even ignored variables declared register in C. I wrote a micro-optimized FFT and posted it back in the USENET days. A year or so later a guy from GE Medical e-mailed me and asked for my source code for their next CT scanner. A strange way for me to help a customer.

There is a LOT more to this than simply hardware; it simply is not practical anymore to depend on your software support to program/maintain a large package in assembler/machine code.

The holidays were not the best time to order parts and boards. The ETA is Monday for everything, so maybe next weekend a real prototype test.
 
diyAudio Member RIP
Joined 2005
I already said yesterday that we do that. I don't see Linux being served by a different CPU architecture; there simply has to be a compelling business argument. The only modern CPU that I've programmed in assembler was a 68040. Apple's preferred compiler vendor would not support any number-crunching optimizations; the 68040 with co-processor had eight 80-bit FP registers (great for pipelining FFTs), but the compiler even ignored variables declared register in C. I wrote a micro-optimized FFT and posted it back in the USENET days. A year or so later a guy from GE Medical e-mailed me and asked for my source code for their next CT scanner. A strange way for me to help a customer.

There is a LOT more to this than simply hardware; it simply is not practical anymore to depend on your software support to program/maintain a large package in assembler/machine code.

In this general connection, I am always reminded of this classic piece: Real Programmers Don't Use PASCAL
 
In this general connection, I am always reminded of this classic piece: Real Programmers Don't Use PASCAL


Love it! Yes - "Real Programmers write self-modifying code, especially if it saves them 20 nanoseconds in the middle of a tight loop."

I once wrote a self-modifying convolution algorithm that eliminated redundant multiplies (most useful kernels are symmetric) and turned all 1, 0, -1 operations into adds/subtracts on the fly.
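
The symmetry part of that trick can be shown in a few lines (a hedged sketch of the general idea only, not the original self-modifying code): for a symmetric kernel the two input samples that share a coefficient are folded together first, so each coefficient costs one multiply instead of two.

[CODE]
/* Sketch of the symmetric-kernel idea only: h[k] == h[N-1-k],
 * N assumed even, so fold the paired input samples and multiply
 * once per coefficient.                                          */
#include <stddef.h>

/* One output sample of a length-N symmetric FIR over x[0..N-1]. */
double fir_symmetric(const double *x, const double *h, size_t N)
{
    double acc = 0.0;
    for (size_t k = 0; k < N / 2; k++) {
        /* x[k] and x[N-1-k] share the same coefficient h[k]. */
        acc += h[k] * (x[k] + x[N - 1 - k]);
    }
    return acc;
}
[/CODE]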
 
Disabled Account
Joined 2012
Here's the point: the Intel/Microsoft lash-up could be a lot better.... There is a real need for something better that is affordable to individuals and small businesses. Just because it isn't being done right now is NO sign that there is NO compelling reason for it. We wouldn't have high-def TV and G4+ etc. without industry being forced to change. There exists a compelling need.

The PC we have in our homes was designed for data manipulation, not for math calculations. Thus we have work-arounds -- like off-loading the video data to a separate GPU with its own memory, and the addition of DSPs for the heavy lifting with math... such as sims, real-time signal processing and subjects where time to execute is critical and predictable. Both code and hardware must be extremely efficient to accomplish this. [enter stage left - DSP]. Most computers (like the PC) are not optimized to do both - data manipulation and mathematical calculations. The existing PC is trying to do both but was set up from the beginning - optimized - to be a data manipulator (database management, word processing, etc.). IMO it's time to have a PC type that is optimized for engineering, science and digital signal processing. Let that PC do data manipulation as a secondary function....

OK, now I'm off my soapbox. [but still upgrade ASAP to SSD] Thx-RNMarsh
 
Current x86 CPUs do not resemble their ancestors.

x86 code is no longer directly executed; it is translated on the fly into simple RISC-like instructions.

Moreover, the old x86/x87 instruction sets are there only for backward compatibility. Most of the instructions actually used are at most ten years old, starting with SSE2 up through SSE3/SSSE3/SSE4.1/4.2 and, more recently, AVX, most of them dedicated to math computation; in particular the recent FMA3/FMA4 (available on AMD CPUs and starting next year for Intel), which allow a floating-point multiply+add in a single operation and with no intermediate rounding that would render the operation less precise than a sequentially executed one.
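
The single-rounding point is easy to see with plain C99 fma() from <math.h> (just an illustration; whether the compiler actually maps it to an FMA3/FMA4 instruction depends on the target and compile flags):

[CODE]
/* Fused vs. separate rounding with C99 fma().
 * Build with something like: cc -O2 fma_demo.c -lm                   */
#include <stdio.h>
#include <math.h>
#include <float.h>

int main(void)
{
    double x = 1.0 + DBL_EPSILON;
    double y = 1.0 - DBL_EPSILON;

    /* x*y rounds to 1.0 first, so the tiny residual is lost. */
    double separate = x * y - 1.0;
    /* fma keeps the exact product internally and rounds only once. */
    double fused    = fma(x, y, -1.0);

    printf("multiply then add: %g\n", separate); /* prints 0          */
    printf("fma:               %g\n", fused);    /* about -4.93e-32   */
    return 0;
}
[/CODE]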

Current CPUs have more than 1000 instructions, compared with the 200 or so of twenty years ago.

Quad-core CPUs and beyond can process more than 100 gigaflops in double precision, and this will be vastly increased in the next few years thanks to GPUs being capable of offloading massively parallelizable floating-point tasks from the CPU, so I don't think that current PCs are inadequate for scientific calculations.
 
I already said yesterday that we do that. I don't see Linux being served by a different CPU architecture; there simply has to be a compelling business argument. The only modern CPU that I've programmed in assembler was a 68040. Apple's preferred compiler vendor would not support any number-crunching optimizations; the 68040 with co-processor had eight 80-bit FP registers (great for pipelining FFTs), but the compiler even ignored variables declared register in C.

Linux is served well on ARM and PowerPC; it's just that these machines are either not performance-oriented or out of reach financially (current IBM mainframe PPCs). The only real alternative to x86 died with the DEC Alpha.

And regarding 68K floating point: you were quite lucky. Some friends of mine bought a Unix System V source license (which cost an arm and a leg) and built 68010 machines around it. One day an alpha customer called... "Your C compiler produces code that the assembler cannot translate!?!" It turned out that the Motorola compiler happily generated VAX floating-point instructions in the 68010 asm output.

Too late here (5 am) to continue...

regards, Gerhard

(written on a Dell Precision laptop running at 2.5 GHz * 4 CPUs * 2 threads, 2 * 512 GB SSD, 16 GB RAM, almost idle..)
 
Love it! Yes - "Real Programmers write self-modifying code, especially if it saves them 20 nanoseconds in the middle of a tight loop."

At THAT time, we had a sign at our door in the Berlin Technical University:

Phearless
Pascal
Phreaks

and UCSD Pascal on an LSI-11/2 or a 68010 really was a Really Good Thing(R).
Pascal MT+ on Z80 was not that bad, but Turbo Pascal on Z80 was GREAT!!!

We also had the first European machine (a dual PDP-11/40E, but running only one processor) with Unix on it. There were only 4 or 5 PDP-11/40Es ever (E was for microprogrammable; someone at TUB made a microprogrammed Pascal interpreter for it, Per Brinch Hansen's version, and it ran like a scalded dog.)

regards, Gerhard