John Curl's Blowtorch preamplifier part II

Status
Not open for further replies.
I'm just wondering if more area is wasted to the cut zone than the actual active regions.

For a die that's .012 inch by .012 inch, it's probably close. The other option at the time was laser cut, but for military hybrids that was a no-no. It would fail PIND testing (particle impact noise detection) as the cut would have small bits of slag from the cut (and melt) process.

Actually, it's 297,209, to first order, ignoring edge effects, process test die, etc.
Whoa, so the PDP-11/04 and 11/23 would roll over 5 times...

A quarter million diodes...lifetime buy for JC, no?

John
 
I worked at the research fab at LLNL. I don't think the general-use dicing saws we had would be too keen on cutting glowing diodes, but the folks there might be better equipped to tell you who might. And it stays inside DOE.

The guy here is asking for recommendations as to which saw... beats me, of course.

Do you have any names there (that are still there) that I could ask for rec's?

If so, can you PM them to me?

Thanks,

In return, I'll send you a batch of antiprotons. You'll recognize the envelope, it'll be the one with a big hole in the middle.

John
 
AFAIK they have femtosecond lasers with no melt now. We successfully prototyped laser-drilled alignment keys directly in the die for connectorless 10G fiber optics.

Hmm... then I guess the worst case would be the purchase of new HEPA filters.

Except, where do you send activated HEPA filters?

edit: Although, the best part is I do not care about the vaporized material re-depositing on the gold on top, as I'm not worried about wirebond integrity, I'll use a bed of nails. And, I've no concern with reverse breakdown, only forward voltage at room and cold. And, I can speak to the vendor to determine how deep I really have to go to isolate the junctions.

John
 
None.

It was funny: one of the girls was probing a disc of them, and the final good-die count was weird. I think it was a negative number.

The auto-probe system's computer was 16-bit, and the total was over 32K.
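The wrap is easy to reproduce: a signed 16-bit counter tops out at 32,767, so any count just past 32K comes back negative. A minimal sketch in Python (the 33,000 figure is invented for illustration, not the actual count):

```python
import ctypes

def as_int16(n):
    """Interpret n the way a signed 16-bit counter would store it."""
    return ctypes.c_int16(n).value

good_dice = 33_000                  # hypothetical count just past 32,767
print(as_int16(good_dice))          # wraps around to -32536
```

Anything up to 32,767 reads back unchanged; one more and the sign bit flips.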

John

Reminds me of a time when I was working at a law firm in the IT department. One of the ladies in our group was working on a project transferring documents from some old WORM media to hard drives. They got her a new box with a couple of honking big hard drives (for the time) and a WORM drive, running Windows 95. Her strategy was to dump all the files into a directory before indexing them and sending them wherever. There were several million files. After weeks of copying she asked me for help because she could only see about 32,000 of them. The maximum number of entries in a directory on a FAT filesystem is a signed 16-bit number. Windows happily let her add more files, but just overwrote the directory entries.
 

Yeah... A Unix system running a decent filesystem would have done two things differently: first, the FILEMAX number would have been 32 bits (possibly unsigned), so she wouldn't have had that problem (though a directory with a couple of million files would have introduced its own problems); second, the copy operation would have failed with an intelligible error code.

Speaking of filesystems and large directories, I had a slightly interesting puzzle at work lately. We maintain a system that transfers files, inbound and outbound, millions per month. The system has job definitions that tell it to pick up a file from some place and put it some other place. When it does so, it gets the source file (which might be on a client's FTP or sftp server, an FTP/sftp dropsite we host where clients drop files, or, for outbound files, NFS filesystem locations, internal FTP/sftp servers, mainframes, etc.) and copies it to a "staging" area, which is actually a round-robin collection of NFS shares. The directory structure is like .../blah/job_name/request_id/run_instance_id/step_1/. Once the source file is successfully collected, it is copied to a .../step_2/ directory. The reason for this is that the job might manipulate the file in some way (compression/decompression, encryption/decryption, unixtodos/dostounix, etc.), and we want to preserve both the source file and the transformed file before delivery. However, the vast majority of files do not require any manipulation. Later, an automated job walks the staging filesystems, removing files and empty directories older than some predefined retention period.

So a couple of years ago they were running out of disk space on the staging shares. They added capacity, but some bright bunny realized that we were staging two copies of identical files, millions of them. So why not just use symbolic links in the step_2 directories if the file was unmolested? Good idea, and the engineers made the change, but nobody looked at the cleanup job. Basically, when it hit a filesystem node it said: "Is this a directory? If so, is it empty? If so, delete it; if not, recurse into it. If not a directory, is it a file? If so, is it old enough to delete? If so, delete it; else move on." This worked great until symlinks were introduced, when suddenly both tests failed (neither a directory nor a file) and the script ignored them. So the files the symlinks pointed to were deleted, but not the links. That left the symbolic link, the step_2 directory, its parent directory, and its grandparent directory behind. The space consumed was negligible, but that's 4 inodes each time. We started to run out of inodes! On top of that, the job_id directories started to have very large numbers of subdirectories. In one case I found over 280,000 subdirectories on each of 4 filesystems. Now the job that cleans those filesystems had to walk all those directories, and it was taking longer and longer to complete, so files were hanging around longer and using up disk space because the cleanup job didn't get to them.
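The failure mode reproduces in a few lines. This is a sketch, not the actual cleanup job (the retention value and directory names are invented): in Python, `os.path.isdir` and `os.path.isfile` both follow symlinks, so once a link's step_1 target has been deleted, the dangling link fails both tests and falls through untouched. Testing `os.path.islink` before the other two fixes it.

```python
import os
import time

RETENTION = 30 * 24 * 3600          # hypothetical 30-day retention window

def cleanup(path):
    """Buggy walker, per the original logic: dir? file? else ignore."""
    for entry in os.listdir(path):
        full = os.path.join(path, entry)
        if os.path.isdir(full):             # False for a dangling symlink
            if not os.listdir(full):
                os.rmdir(full)
            else:
                cleanup(full)
        elif os.path.isfile(full):          # also False once the target is gone
            if time.time() - os.path.getmtime(full) > RETENTION:
                os.remove(full)
        # a dangling symlink reaches here and is never removed

def cleanup_fixed(path):
    """Fixed walker: check for symlinks before the dir/file tests."""
    for entry in os.listdir(path):
        full = os.path.join(path, entry)
        if os.path.islink(full):            # links first; isdir/isfile follow them
            if not os.path.exists(full):    # dangling: target already aged out
                os.remove(full)
        elif os.path.isdir(full):
            if not os.listdir(full):
                os.rmdir(full)
            else:
                cleanup_fixed(full)
        elif os.path.isfile(full):
            if time.time() - os.path.getmtime(full) > RETENTION:
                os.remove(full)
```

Like the real job, the fixed walker prunes a freshly emptied directory on its *next* run rather than the current one, so the link, step_2, and parent directories disappear over a few successive passes.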

I told the engineering team what was happening and how to fix their code, but asked them to hold the fix while my team and I did some hygiene over several weekends to remove tens of millions of dead symbolic links and directories. We got inode usage down to single digits from around 70%, and a cleanup job that used to take over 5 days now completes in a couple of hours. This was a big win for our operations team.

So remember kids: it's all well and good to monitor disk block usage, but watch your inodes too!

PS: I believe I deserve a medal for typing that and fixing at least one autocorrect per line (including this one).
 
I want to talk about improving your audio system in small ways, like cleaning your connectors, treating your CDs, SACDs, etc. with a better cleaner, and so on. I realize that most here will never invest in real hi-end products, and I don't blame you. We all have a limited income (not like some hi-enders) and we want the best 'bang for the buck'. It's true that my own audio playback system is worth $50,000 or so, but that is because I design my own electronics, I buy used, and people give me samples of their products. I don't expect most of the rest of you to be able to keep up, but that does not stop you from having a 'better' hi-fi system than the 'herd' of accountants, engineers, etc. who don't think much about it.
 
John when you think about improving your audio system in small ways, does that include changing the way store-bought commercial audio equipment connects to the AC mains? Different AC power cables, adding surge protectors, power line filters, power line regenerators, gold plated mains sockets, etc?
 
GE used to be one of the larger suppliers of silicone two-component systems, but they bailed out of that industry. I wouldn't be surprised if some of those Dow compounds are old GE silicone formulations.

GE's silicones were spun off as Momentive.

The ones being described sound like materials with a low level of cross-linking. For electronics, you usually want a Pt-catalyst cure, as opposed to an RTV, so there are some vinyl sidechains.

The best producers out there are NuSil and Wacker; Dow Corning, Bluestar (formerly Rhodia), and Momentive are also top suppliers.
 
Mark, those are hi-end improvements that are more appropriate for true tweakers than for most audio listeners. This does not mean that I personally have not heard the difference in AC cables, power line filters, etc., but they tend to cost too much for people with a normal budget, unless tweaking is their primary hobby.
You have to start with a pretty good hi-fi system that has been well maintained. This is most important. However, a real drawback is people 'trained' to be engineers, etc. who think they know what something sounds like without actually trying it. This is where engineers, primarily, fail to make great audio designers.
 