Discrete Opamp Open Design

Disabled Account
Joined 2004
Some of the engineering software used on the PC presents only a GUI front end, while underneath it all a separate OS is running the show, sometimes in machine language. DADiSP, which I mentioned, is one example: originally it ran under DOS while the front end looked like whatever Windows the machine came with. Seems the more of the Windows OS et al. you can get rid of, the faster everything becomes.

Hardware and software improvements will optimize what we have to work with. Then add SSD.

Thx-RNMarsh

Richard, I think your view is rather dated; maybe someone like Wahab could fill you in on what has happened in the last few decades. Comments like "engineering software using only the GUI interface" make no sense at all (even to me).
 
Disabled Account
Joined 2012
Quite possibly using old data is the wrong way to illustrate the PC's shortcomings for engineering use, especially if you have never been exposed to supercomputer operation in your work environment. But it's the home PC I am talking about here as a comparison.

I have used computers at home since day one, still use them at home, and am still frustrated with them. I am not saying businesses don't have access to great stuff today; I don't have great stuff for home PC use. I don't have a working Linux system and programs that run under it for my PC. I don't have a PC that is fast enough and probably never will be; call me spoiled. And I can't get a high-end Dell with Bridge-like quality for $1500. IMO the average PC is far from what I want. I want what you have, at an affordable PC-like price. I am talking about the PC/Windows world and what is available in that world, not what is or might be available as SOTA for industrial use.
That would be the proper context to read me.

I see users here with free SPICE software, and even MicroCap (which I have), having problems that industry doesn't have, and I am sure the gap could be closed (somehow). Maybe Microsoft has killed any patience I used to have, but even MicroCap doesn't work well in many areas. So, you tell me: what does it take to get Bridge performance in a PC in my home? I am all ears and want recommendations. Thx-RNMarsh
 
Last edited:
Disabled Account
Joined 2012
Richard, I think your view is rather dated; maybe someone like Wahab could fill you in on what has happened in the last few decades. Comments like "engineering software using only the GUI interface" make no sense at all (even to me).

Oh no, thanks... that's way more than I need or want to know. I only want to use it; ditto the software. I know instability and error-proneness are less likely in upscale workstations: ECC memory, certified software drivers, a stable OS, et al. That's a clue to me. I still don't understand why such beasts are not better than they are. But I really don't want to know, either, because it's just excuses and the solution still costs a lot more. -RNM
 
Last edited:
Launching and running applications from the terminal has been common practice on Mac and Linux for many years, for high-end (and a few not-so-high-end) engineering, simulation, CAD and math applications.

You can get a quad-core i7 Mac mini for ~850 bucks; grab two of them, connect them together, and control them as a small cluster from the terminal/console. There's your 1700 USD. They can run Linux, Mac OS, or Windows, all natively. I'm sure one could put something similar together with a home-built Linux PC for a bit less, but probably not all that much less. Mountain Lion is quite an impressive OS these days, IMO. Those who call Macs toys in this area simply haven't looked closely enough, as they have been used for scientific modelling at universities for quite a while.

What I mean by running from terminal is that it's launched from there; it may still spawn application windows and display them via OpenGL or whatever. But these days a well-trimmed/compiled application doesn't really necessitate doing this.
 
Last edited:
AX tech editor
Joined 2002
Paid Member
I think it's a little more effective than "Jumping to Conclusions" or "Dodging the Issues", but not as good as "Beating Around the Bush".

Dale

... flying off the handle ? ;)

Re: thinking: I once had a manager who saw me sitting at my desk, hands folded behind my head, and asked what I was doing. "Thinking."
Said he: "I'm not paying you to think, I am paying you to work." And I swear, he was dead serious.

jan
 
diyAudio Member RIP
Joined 2005
I think it's a little more effective than "Jumping to Conclusions" or "Dodging the Issues", but not as good as "Beating Around the Bush".

Dale
But the defense for tax evasion "I was just playing dodge-ball" doesn't work. Trust me.

Indoor golf is also a great activity, given a supply, and adequate knowledge, of cooperative partners. But beware of premature expostulations.

And come twice a year, we are all able to fall forward and spring back, which also fits in nicely with running late and rising early.
 
diyAudio Member RIP
Joined 2005
Re: thinking: I once had a manager who saw me sitting at my desk, hands folded behind my head, and asked what I was doing. "Thinking."
Said he: "I'm not paying you to think, I am paying you to work." And I swear, he was dead serious.

jan
It's generally dangerous to argue with managers. Once I was in a roomful of people who received a lecture from an executive, one who had recently taken over the group in a corporate shakeup, and he told us that our competition outside the US were accustomed to paying engineers of order $10k per annum. I was barely able to suppress a heckle, to the effect that someone like him would pull down perhaps all of $20k, less if based on merit.

He didn't last all that long though. I knew his days were numbered when he made the mistake of introducing the Chairman and CEO thusly: "And now I'd like to introduce ______, currently the Chairman and CEO of ____." This was not missed by the august personage, and a wave of cringing passed over the multitudes. I think the introducer lasted about a year after that.
 
Disabled Account
Joined 2004
Quite possibly using old data is the wrong way to illustrate the PC's shortcomings for engineering use, especially if you have never been exposed to supercomputer operation in your work environment. But it's the home PC I am talking about here as a comparison.

Well, I have been exposed to the dated examples. I used an LSI-11 in 1970 in a class project to run the sieve of Eratosthenes. Matlab (running as a math interpreter) on my laptop is so much faster I would not know how to compare; if I compiled it with MinGW (gcc for Windows), who knows. BTW, you should benchmark SoX or BruteFIR on a PC against dedicated real-time DSP sound processing; you might be surprised.
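The sieve actually makes a handy then-vs-now benchmark. Below is a minimal sketch (my own toy code, not anything from the thread) that times a naive Python sieve on a million numbers; the elapsed time is of course machine-dependent, and even an interpreted run like this would dwarf what the old minicomputers managed.

```python
import time

def sieve(limit):
    """Naive sieve of Eratosthenes: return all primes <= limit."""
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            # Mark every multiple of p starting at p*p as composite.
            for multiple in range(p * p, limit + 1, p):
                is_prime[multiple] = False
    return [n for n, flag in enumerate(is_prime) if flag]

start = time.perf_counter()
primes = sieve(1_000_000)
elapsed = time.perf_counter() - start
print(f"{len(primes)} primes below 1,000,000 in {elapsed:.3f} s")
```

Compiling the same loop (gcc, or Matlab's JIT) typically buys another order of magnitude or two, which is the comparison being made above.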

Our first system here was a time-shared VAX (circa 1980); even at 4 AM, with almost the whole thing to myself, it couldn't hold a candle to an off-the-shelf Dell Linux workstation (<$1000).

Comparing $1.2 billion, 8-megawatt supercomputers is not quite fair. In a time-shared environment you have a thousand or so users whose jobs have NOTHING to do with each other.

As someone mentioned, the PowerPC/x86 case might make a better performance-vs-architecture comparison. The advantage is illusory, usually erased in the next generation.

Another thing not mentioned is that 1500 W at the wall is a drop-dead limit per box. This is true for portable medical equipment also. Anything over that has no market among individual users, pro or not.
 
It takes a bit more to make a cluster than just connecting 2 (or more) machines together: cluster software, and applications which are able to distribute their tasks across cluster nodes.

Yes, and did I not give examples of the type of applications? Math, visualisation and scientific modelling are all good candidates for distributing tasks; the type of applications that have been under discussion for the last 10 pages or so. They also have Thunderbolt connections. I'm not sure if the latest ones are still limited to 2 channels as mine is, but regardless that's still 20 Gbit/s each way. Thunderbolt carries PCIe and DisplayPort as standard, which would probably allow the connection of an external graphics card as another node, used as a GPU for math.

So does anyone here really need more than that? I.e. 2 x i7 quads, internal GPUs in each machine, a Thunderbolt link between them, and an external GPU node daisy-chained off each machine.
 
Last edited:
Having admittedly not read much of the recent discussion, can I suggest that for DIY software development, Python with IPython, SciPy, NumPy and matplotlib provides a very powerful set of scientific computing tools. IPython with pyzmq allows for powerful cluster computation, and it all runs on Linux/windoze/Mac, so it is architecture-agnostic.
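For anyone wanting to try that stack, here is a hedged sketch of the kind of number-crunching it targets: a naive DFT of a test tone, using only the standard library so it runs anywhere (NumPy's `fft` does the same thing vectorised and far faster; all function names below are my own, not from any of those packages).

```python
import cmath
import math

def dft(samples):
    """Naive O(N^2) discrete Fourier transform.

    numpy.fft.fft computes the same result in O(N log N).
    """
    n = len(samples)
    return [sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
            for k in range(n)]

# 64 samples of a pure tone at bin 5: the magnitude spectrum
# should peak at bin 5 (with a mirror at bin 59 for a real signal).
n = 64
tone = [math.sin(2 * math.pi * 5 * t / n) for t in range(n)]
spectrum = [abs(x) for x in dft(tone)]
peak_bin = spectrum.index(max(spectrum))
print(f"peak at bin {peak_bin}")
```

With NumPy the whole transform collapses to one call, which is why the stack is attractive for audio analysis work like the simulations discussed above.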

I'd also suggest that ARM processors will come to the desktop faster than people might currently realise, and Intel/Microsoft may well be left in tears over that situation!
 
Umm, I read your link (the first one); I'm at a loss to see what clashes with what I said...

Macs can run Linux natively, Mac OS natively, and Windows natively. There is nothing preventing a cluster from being realised with the small system I described; no more than with any other small collection of CPUs, anyway.

I read the second link (skimmed); still nothing. I didn't read it in detail because it includes some tech info beyond the scope and beyond my care factor =)
 
Last edited:
Umm, I read your link (the first one); I'm at a loss to see what clashes with what I said...

Macs can run Linux natively, Mac OS natively, and Windows natively. There is nothing preventing a cluster from being realised with the small system I described; no more than with any other small collection of CPUs, anyway.

I read the second link (skimmed); still nothing. I didn't read it in detail because it includes some tech info beyond the scope and beyond my care factor =)


I think his point was simply this:

The software config needed to realise a fully functional cluster is far more complicated than hardware+OS.
 
I think his point was simply this:

The software config needed to realise a fully functional cluster is far more complicated than hardware+OS.

Sure, but did that point need to be made? Of course it is.

You can't just connect an external graphics card, or a group of graphics cards, together and ask them to act as processors either, but that's been discussed at length here. Pretty much everything under discussion assumes some level of customisation of the environment, just as the basic simulation tools need customisation for the task at hand so the routines run efficiently.

The need or desire to build something like this in the first place kind of assumes that level of knowledge.
 
Last edited:
Sure, but did that point need to be made? Of course it is.

Anyone who has been through the pain of trying to set up a multithreaded application that shares processes in a distributed fashion could easily get offended by trivialising the matter ;) Frankly, cluster computing is in dire need of commoditisation. Amazon, Google et al. have gone a long way towards making it affordable, but it's certainly not something any bloke (even with a reasonable level of general computer knowledge) could go and configure and do something productive with inside of an afternoon. Which is where the idea really starts to lose its shine at the moment.
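To make the contrast concrete: within a single box, fanning work out across cores is already commoditised in Python's standard library, and the toy sketch below (my own example) is about all it takes. A real multi-machine cluster additionally needs node discovery, data serialisation, a scheduler and fault tolerance, which is exactly the configuration pain being described.

```python
from multiprocessing import Pool

def count_divisors(n):
    """CPU-bound toy task: count the divisors of n by trial division."""
    return sum(1 for d in range(1, n + 1) if n % d == 0)

if __name__ == "__main__":
    jobs = range(1, 2001)
    # Pool() spreads the jobs over all local cores transparently.
    # Extending this same map() across multiple machines is where
    # the real cluster complexity begins.
    with Pool() as pool:
        results = pool.map(count_divisors, jobs)
    print("most divisors below 2000:", max(results))
```

Tools like ipyparallel (mentioned earlier via IPython/pyzmq) aim to keep roughly this map-style interface while hiding the multi-node plumbing.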
 
Trivialise? Perhaps you should read the last 10 pages and look at the applications under discussion. There is nothing trivial about the applications or the people in the discussion.

I sure don't have the faculty or resources/time to do it. Maya already goes some way towards doing it out of the box for its rendering farms, but that's very application-specific, and that application costs several times the amount of the hardware system I laid out... per seat.