Discrete Opamp Open Design

So the folklore on the street says. But the laptop CPU architecture isn't optimized for number crunching, the word size is a lot smaller, etc. A larger word size means more can be done in each cycle. Number crunching is what the supers do best, as they have a lot of it to do. Also, memory access is much faster, and there's a lot more of it. [I was at LLNL when the first Cray went in, and the subsequent ones.] Thx-RNM

The CRAY-2 offered up to 2 GB of memory, which was considered excessive at the time. Can you quote some computational comparisons? The ICs were not up to it; there was no solid-state memory in 1985 that even remotely matches what we have now, i.e. the CRAY had an 80 MHz or so clock and 50 ns RAM.

I made my first prediction of 2013 and was right! I went to walk my dog at 5:15 and said that when I got back, Wisconsin would already be down 14-0. :(
 
I remembered later: when researching the literature for a graduate course project in 1998, I ran across a paper on phase retrieval in image processing from the '80s. They quoted massive numbers of 128x128 2D FFTs offloaded to a CRAY, and the numbers are a joke by today's standards. FFT throughput is one of the bread-and-butter apps for number crunchers.
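
To put a number on "a joke by today's standards", here's a quick timing sketch (Python/NumPy); the throughput is whatever your machine reports, I'm not quoting anyone's benchmark:

```python
# Rough 128x128 2D FFT throughput on an ordinary PC; NumPy, no tuning.
import time
import numpy as np

x = (np.random.randn(128, 128) + 1j * np.random.randn(128, 128)).astype(np.complex64)

n = 10_000
t0 = time.perf_counter()
for _ in range(n):
    np.fft.fft2(x)
dt = time.perf_counter() - t0
print(f"{n / dt:,.0f} 128x128 2D FFTs per second, single core")
```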

EDIT - This is really fun stuff in the nerdy sense; the paper I mentioned is somewhere in the appendix... http://digitus.itk.ppke.hu/~matyi/optika/Phase_Diversity/AO82_PRComparison1.pdf
 
Supercomputers are designed foremost to be efficient; computational efficiency is the metric of interest. This is where the men get separated from the boys.

FFTs are not a bread-and-butter app for these supercomputers. PCs can do that fast enough... even without being computationally efficient. Besides, who cares what a 30+ year old computer can do compared to a PC today? The best PC today isn't a supercomputer. The IBM Sequoia is a brilliant and highly efficient number cruncher... ten years in the design with IBM and LLNL computer scientists and others.

Update! The Cray XK7 is now the world's fastest supercomputer (Nov 12, 2012)! IBM Sequoia is second.

-Thx RNMarsh
 
For a while, graphics computing was heading the way of VLIW (very long instruction word) or CISC computing, but parallel RISC-based cores running at higher speeds were more appropriate for the tasks a home or even most enterprise solutions needed, I guess.

I was working with a Silicon Graphics PowerChallenge and O2 in Shake/Flame on IRIX for a while, when I did a special effects/compositing scholarship back in the late '90s.

I believe they owned Cray for a while too, before their demise.
 
I am sure you can run different TRANs on each of those Linux boxes, but can you actually run one and only one TRAN whose computational load is split across all these Linux boxes? I do not think so. But if so, no one would be happier than me, who runs hour-long TRAN analyses on a daily basis!

Is there a SPICE around that can do TRAN and take advantage of multiple cores/CPUs?

Our entire network of Linux boxes is configured as a compute farm, and at night you could have dozens of machines running a transient analysis in parallel. Things that take days can take hours. The partitioning is special software, but it does exist.
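
For anyone who wants a poor man's version of the simple case (many independent runs, not one split run), nothing special is needed. A minimal sketch, assuming ngspice is installed and you have one deck per corner; the deck names here are made up:

```python
# Fan independent transient runs out across local cores.
# A real farm does the same across machines (ssh, a scheduler, etc.);
# the point is that each .TRAN deck is an independent job.
import subprocess
from concurrent.futures import ProcessPoolExecutor

DECKS = ["corner_ff.cir", "corner_tt.cir", "corner_ss.cir"]  # hypothetical names

def run_deck(deck):
    log = deck.replace(".cir", ".log")
    subprocess.run(["ngspice", "-b", "-o", log, deck], check=True)
    return log

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        for log in pool.map(run_deck, DECKS):
            print("finished:", log)
```

Splitting a single .TRAN across boxes is the hard part; that's what the special partitioning software is for.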
 
Matrix inversion can be parallelized.
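
That's the crux, since each SPICE timestep is dominated by a linear solve. You can see that kind of parallelism with nothing fancier than NumPy, assuming it is linked against a multithreaded BLAS such as OpenBLAS or MKL (SPICE itself uses sparse solvers, but the idea is similar):

```python
# The dense solve below uses every core automatically, because NumPy
# hands it to a multithreaded LAPACK/BLAS; watch your CPU meter.
import time
import numpy as np

n = 4000
A = np.random.randn(n, n) + n * np.eye(n)   # well-conditioned test matrix
b = np.random.randn(n)

t0 = time.perf_counter()
x = np.linalg.solve(A, b)
print(f"solved {n}x{n} in {time.perf_counter() - t0:.2f} s, "
      f"residual {np.linalg.norm(A @ x - b):.2e}")
```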

Go to the LTspice Yahoo Group (you have to sign up); the database shows some execution speed vs. processor data. Multi-core does sims factors faster on a modern multicore PC (maybe not scaling exactly with the number of cores, but worthwhile).
 
I can say now that the CAD software I am using has started to take advantage of both multiple processors and multithreading; rendering and 3D surface development is much faster than it has ever been. Unless you had the money for a seat of CATIA or a few other programs, you just couldn't take advantage of the hardware we have today. I am looking forward to having more processors and threads, and to being able to offload some of the number crunching to these super new GPU cards. The day that I don't have to wait for a screen to refresh will be a time saver for sure.
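
A toy sketch of the kind of GPU offload I mean, assuming a CUDA card and the CuPy library (both assumptions; a CAD package's own GPU path is whatever the vendor wrote):

```python
# Same matrix product on CPU (NumPy) and GPU (CuPy), timed side by side.
import time
import numpy as np
import cupy as cp   # assumes a CUDA GPU and CuPy are installed

a = np.random.randn(4000, 4000).astype(np.float32)

t0 = time.perf_counter()
np.matmul(a, a)
print(f"CPU: {time.perf_counter() - t0:.3f} s")

a_gpu = cp.asarray(a)               # copy the matrix into GPU memory
cp.matmul(a_gpu, a_gpu)             # warm-up run
cp.cuda.Stream.null.synchronize()
t0 = time.perf_counter()
cp.matmul(a_gpu, a_gpu)
cp.cuda.Stream.null.synchronize()   # wait for the kernel before timing
print(f"GPU: {time.perf_counter() - t0:.3f} s")
```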
 
I am sure you can run different TRANs on each of those Linux boxes, but can you actually run one and only one TRAN whose computational load is split across all these Linux boxes?


Yes, last month we generated the eye diagram for a 10G TIA running a 2^31 bit pattern, looking for inter-symbol interference. One circuit, one LONG transient analysis: 3+ days on a fairly loaded workstation, a few hours in the farm.
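
The eye itself is cheap once the waveform exists: you just fold the transient on the bit period. A minimal sketch with a synthetic NRZ waveform standing in for the real simulator output; the sampling rate, unit interval, and noise level are all made up:

```python
# Build an eye diagram by folding one long waveform onto a two-UI window.
import numpy as np

ui = 100e-12                    # 100 ps unit interval, i.e. 10 Gb/s (assumed)
fs = 100e9                      # 100 GS/s uniform sampling (assumed)
t = np.arange(0, 20000 * ui, 1 / fs)
bits = np.random.randint(0, 2, size=int(t[-1] / ui) + 2)
v = bits[(t / ui).astype(int)] + 0.05 * np.random.randn(t.size)  # toy NRZ + noise

t_fold = t % (2 * ui)           # wrap the time axis onto two unit intervals
eye, _, _ = np.histogram2d(t_fold, v, bins=(200, 100))
print("eye histogram:", eye.shape, "peak density:", int(eye.max()))
# matplotlib's imshow(eye.T, origin='lower') will show the familiar picture
```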
 
Soooorry, you seem to.

Really? And here I thought it was you who brought that up. Whatever.

SIM users can rejoice in the fact that standards are being used which are designed to allow for scaled-down PC use in the future. The supercomputer use of RAID and Linux and the like will make closing the performance gap a little easier, and even faster than before.
Cray is alive and well. Their memory systems are being sold to HP, Dell, and IBM through a standard Lustre client. Cray's Sonexion (storage) is available for a wide range of x86 Linux InfiniBand-attached compute clusters, with up to 1 TB/s of I/O bandwidth. The trickle-down theory may not work in economics/politics, but it is working well in the computer world.
Thx-RNMarsh
 
RNMarsh,
While the speed of SSDs is very inviting and the price is coming down (though still not even close to competitive with large disk-based drives), that isn't what is stopping me from purchasing them. I have been using high-speed disk drives for some time, whether 10,000 rpm or even some 15,000 rpm SCSI drives, and RAID also increases read/write speed, so my concern is not speed right now.

My concern is the failure mode of SSDs. When they fail, how can you recover the data? If you are running a RAID array with mirroring (or parity) then you would have a redundant copy, if you are set up in that configuration, but what of a single SSD? I have considered that many times to speed up my laptop for CAD, but unless I constantly have an external drive connected doing backups, what happens when the drive fails in the middle of a design? I could potentially lose all the work I have just done. At least when a standard drive fails I can pay someone to recover the information off the drive, but I have not heard of a way to recover data from what is, in essence, RAM chips. What are the thoughts on that critical problem?
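
One cheap hedge against the mid-design loss case, whatever the drive technology: snapshot the working directory to a second drive on a timer. A crude stdlib-only sketch; the paths and interval are hypothetical:

```python
# Snapshot the CAD working directory to a second drive every 30 minutes.
# Not a substitute for real backup; just a hedge against mid-session failure.
import shutil
import time
from datetime import datetime
from pathlib import Path

SRC = Path("C:/CAD/current_project")   # hypothetical working directory
DST = Path("D:/snapshots")             # hypothetical second drive

while True:
    stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    shutil.copytree(SRC, DST / stamp)
    print("snapshot:", stamp)
    time.sleep(1800)
```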
 
Really? And here I thought it was you who brought that up. Whatever.

The trickle-down theory may not work in economics/politics, but it is working well in the computer world.
Thx-RNMarsh

Richard, you know me, I'm more interested in how much computing power the Linux geeks give me for free than in how much my tax dollars can buy. I only mentioned FFTs because I observed it was worth it for astronomers to buy CRAY time for problems that are essentially nothing but massive 2-D FFT crunching. The oil companies also use them for 3-D feature extraction from massive data sets (the ones that are done with arrays of explosions and geophones). This time it's 3-D FFTs, like a giant CAT scanner.

Trickle-down? How about trickle-up? :D

"The world's fastest supercomputers have come back to the U.S. In June, the title was claimed by a machine named Sequoia at Lawrence Livermore Labs. Monday, at the Department of Energy's Oak Ridge National Laboratory, what could be an even faster computer comes online. It's called Titan and it would not have been possible were it not for the massive market for video games."
 
Richard, I meant SGI's demise, not Cray's; they offloaded Cray a couple of years before they filed Chapter 11 for the second or third time.

Yeah, I'm using SSD, love it. It doesn't replace the hard drive, that's not the point; I just use it for the system drive, applications, and current project files, and everything else is still on HD. Mine is a 240GB OWC Mercury Extreme Pro 6G SSD. It's rather secure against loss as it is server grade, but everything is backed up... when I remember.

Up to 100X greater data protection than the highest rated enterprise class conventional hard disk drive (HDD) provides.

5-year warranty, wear-levelling, 7% over-provisioning, etc. It's specced for enterprise computing, so it has to be reliable or they end up in court.

Combined with an i5 it's pretty astounding performance; even in Maya it pwns what I remember of the SGIs for most tasks, except raytracing perhaps. I didn't have much experience with the SGIs and 3D though; my scholarship was for compositing.
 
I've seen demos of SSDs on PCs last year and was really impressed with the speed. I wanted to convert to all SSD right then and there, but then I saw the price tag. So I bought a combo SSD/HDD, and that helps, but it's nothing compared to all SSD. Backup is a must anyway, so loss of data is no serious setback. Now that the price is down to us non-sponsored human beings, I can do the exchange... 250-500 GB of SSD for C:, and D: would be the rest in HDD, or some such. I am bringing up supercomputers and memory because it is really all about SIMs, and the super memory systems such as the Cray one I mentioned used a lot of both SSD and HDD. I'll do it this year sometime: i7 with SSD. The SSD is such a huge improvement that all should try to upgrade with some or all SSD. We all do backups anyway. Don't we? Thx-RNMarsh
 
Richard, you know me, I'm more interested in how much computing power the Linux geeks give me for free than in how much my tax dollars can buy. Trickle-down? How about trickle-up? :D

..... and it would not have been possible were it not for the massive market for video games."

I forgot, forgive me. The original use, and it is still used for that purpose, was the physics of nuclear weapons design: above-ground simulation. [I'll now get a call from Homeland Security.] Which is always a reason why they are located in secure places. The physics is basic research in many ways; what goes on inside the core of the sun is a similar study. High temps, high pressures, etc. The work, even decades ago, involved huge 3-D matrix manipulation in near real time for modeling/visualizing what is happening. The high-def video has been there since before PC games. But then I don't do games at all... so I haven't followed where they originated as far as R&D goes. Thx-RNMarsh
 
I forgot, forgive me. The original use, and it is still used for that purpose, was the physics of nuclear weapons design: above-ground simulation. [I'll now get a call from Homeland Security.] Which is always a reason why they are located in secure places. The physics is basic research in many ways; what goes on inside the core of the sun is a similar study. High temps, high pressures, etc. The work, even decades ago, involved huge 3-D matrix manipulation in near real time for modeling/visualizing what is happening. The high-def video has been there since before PC games. But then I don't do games at all... so I haven't followed where they originated as far as R&D goes. Thx-RNMarsh

It's where the money goes; what does it matter that the scientists benefit from the destruction of our children's minds. The latest supercomputers are based on NVIDIA GPUs developed for video game rendering. I personally hate all video games; not too keen on the weapons either.