Using The Slide Rule In Electronic Technology

rif

Yep, my dad showed me how two ordinary wooden rulers (with linear scales in inches) can perform addition and subtraction, while the two pieces of a slide rule (with log scales) can perform multiplication and division. It's a moment I'll always remember.
I wonder if that's how modern CPUs multiply and divide. Just have a long table of logs, do a few lookups, done. Something tells me that's not the most efficient method, however.
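That slide-rule principle is easy to demonstrate: since log(A) + log(B) = log(A×B), adding distances on a log scale multiplies. A minimal Python sketch of the idea (function name is mine), which also hints at why precision becomes a problem:

```python
import math

def slide_rule_multiply(a, b):
    """Multiply two positive numbers the slide-rule way:
    add their logs, then take the antilog (exp)."""
    return math.exp(math.log(a) + math.log(b))

print(slide_rule_multiply(3.0, 7.0))  # ~21.0
# With large integers the log route loses exactness -- one reason
# real CPUs don't do integer multiplication this way:
print(slide_rule_multiply(10**15, 3) - 3 * 10**15)  # small but (typically) nonzero
```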
 
You may have a point there.

The computer at Universidad Católica de Buenos Aires when I started studying Engineering in 1969 was an IBM 1620.

It was nicknamed "CADET", meaning "Can't Add, Don't Even Try" :eek:

And how did it manage to add when required?

[Attached image: CADET IBM 1620.gif]
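The joke was literal: the 1620 had no adder circuit and looked up digit sums in tables held in core memory. A toy Python sketch of the idea (my simplification, not the 1620's actual table layout):

```python
# Addition by table lookup, in the spirit of the IBM 1620 "CADET".
# Precompute every single-digit sum, then add numbers digit by digit.
ADD_TABLE = {(a, b): a + b for a in range(10) for b in range(10)}

def lookup_add(x, y):
    """Add two non-negative integers using only single-digit table
    lookups and a carry -- no '+' on multi-digit values."""
    result, carry = [], 0
    xd = [int(d) for d in reversed(str(x))]
    yd = [int(d) for d in reversed(str(y))]
    for i in range(max(len(xd), len(yd))):
        a = xd[i] if i < len(xd) else 0
        b = yd[i] if i < len(yd) else 0
        s = ADD_TABLE[(a, b)] + carry   # small lookups only
        result.append(s % 10)
        carry = s // 10
    if carry:
        result.append(carry)
    return int(''.join(str(d) for d in reversed(result)))

print(lookup_add(1969, 1620))  # 3589
```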
 
I've been a designer on a 64-bit CMOS microprocessor, and I assure you that on billion-transistor CPUs, multiplication and division are performed by hardware logic gates: a Wallace tree multiplier and an SRT (Sweeney-Robertson-Tocher) divider. Definitely not A*B = exp( log(A) + log(B) ). I recommend that you calculate the approximate size of the lookup tables necessary to perform multiplication of two 64-bit integers: how many rows in each table, and how many bits per row. A two-dimensional array of hardware adders is MUCH smaller and, more importantly, much faster.

The log trick to finesse away multiplication became obsolete as soon as TRW and Weitek released standalone hardware multiplier chips in the late 70s and early 80s.
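Taking that suggestion literally: even the log-table route needs a table indexed by every possible 64-bit operand. A back-of-envelope calculation in Python (my numbers, not the poster's):

```python
# Rough size of one lookup table for 64-bit multiplication via logs:
# one row per possible 64-bit operand, each row holding a log value
# of (say) 64 bits. Back-of-envelope figures only.
rows = 2**64                       # one entry per 64-bit input
bits_per_row = 64                  # generous: a 64-bit fixed-point log
total_bytes = rows * bits_per_row // 8
print(f"rows: {rows:.3e}")                     # ~1.845e+19 rows
print(f"size: {total_bytes / 2**60:.0f} EiB")  # ~128 EiB -- per table!
# A 64x64 array of hardware adders (a few thousand gates arranged as
# a Wallace tree) is absurdly smaller, and faster too.
```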
 

PRR

how modern CPUs multiply and divide
Logs are not precise (or the tables are too large to contemplate). And historically, log tables were full of errors.

There is a trivial (not cheap) circuit to add binary numbers, and a notation (two's complement) that lets the same circuit subtract.

The children's books said multiplication is repeated addition. True, but impractical. Binary multiplication is trivial at the bit level: just bit-shifts, and then those shifted partial products are summed.
"In binary encoding each long number is multiplied by one digit (either 0 or 1), and that is much easier than in decimal, as the product by 0 or 1 is just 0 or the same number." "..just shifts and adds." https://en.wikipedia.org/wiki/Binary_multiplier

Faster schemes exist that cut down the cost and delay of the simple approach. At today's gate prices, full combinational logic is possible.
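A minimal shift-and-add multiplier in Python, mirroring what that Wikipedia page describes (each set bit of one operand contributes a shifted copy of the other):

```python
def shift_and_add_multiply(a: int, b: int) -> int:
    """Binary long multiplication: for each 1 bit in b, add a copy
    of a shifted left by that bit's position. Only shifts and adds."""
    product = 0
    shift = 0
    while b:
        if b & 1:                  # this partial product is 'a', shifted
            product += a << shift
        b >>= 1
        shift += 1
    return product

assert shift_and_add_multiply(27, 13) == 351
```

A hardware multiplier generates all the partial products at once and sums them with a tree of adders, rather than looping like this.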
 

rif

I've been a designer on a 64 bit CMOS microprocessor ... multiplication and division are performed by hardware logic gates ... A two dimensional array of hardware adders is MUCH smaller and more importantly, much faster.
Makes sense.

How about things like trigonometric functions? Do they take the usual series expansion, sin(x) = x - x^3/3! + x^5/5! - ..., figure out how many terms are needed for a given accuracy, and then just compute the sum?
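For what it's worth, here is the brute-force series approach the question describes, sketched in Python (real math libraries and FPUs typically do range reduction followed by tuned polynomial approximations or CORDIC-style methods, rather than the raw Taylor series):

```python
import math

def taylor_sin(x: float, tol: float = 1e-15) -> float:
    """sin(x) = x - x^3/3! + x^5/5! - ...
    Keep adding terms until they drop below tol."""
    term, total, n = x, x, 1
    while abs(term) > tol:
        term *= -x * x / ((2 * n) * (2 * n + 1))  # next odd-power term
        total += term
        n += 1
    return total

x = 1.0
print(taylor_sin(x), math.sin(x))  # agree to ~15 digits for small x
# For large x the raw series needs many terms and loses accuracy,
# which is why real implementations first reduce x into a small range.
```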
 
Logs are not precise (or are too large to contemplate). ... Binary multiplication is trivial at the bit level, just bit-shifts, then those bits are summed. ... At today's gate prices, full combinational logic is possible.
Ahh! Log tables. Once, when preparing a talk to fellow engineers, I had an opportunity to mention log tables as a quaint way of doing maths. I desperately searched for my old four-figure log tables, or the five-figure book, to illustrate the point. I could not find them; they're collecting dust somewhere in the loft. So I generated a page... using Excel!
 
"Programming" Slide Rule For Resonance Problems
In the era BC (Before Calculators), my little trick for R/C/L-related frequency calculations was remembering that 1/(2 Pi) is almost exactly equal to 0.16.

For audio RC filters, I also memorized the same number with the decimal place shifted for convenience.

If working in hertz, ohms, and microfarads, the constant is 160,000. For example, f = 160,000/RC. So an 8 ohm speaker with a series 1000uF cap would have a (-3 dB) frequency of 160,000/(8 x 1000), or 20 Hz.

(Driving a speaker through a big fat electrolytic cap to block DC was an awful hack typical for that era of transistor audio amplifiers.)

If working in kilohertz, nanofarads, and kilohms, the constant is 160. For example, f = 160/RC. If your op-amp feedback loop has a 10k resistor with a 10 nF cap across it, its impedance drops by 3 dB at 160/(10 x 10) kilohertz, which is to say, at 1.6 kHz.

That made a lot of filter-related calculations very quick. You didn't even need a slide rule, or the back of an envelope; you could just do them in your head.

Accuracy? Well, the accuracy of this simple approximate formula is better than the tolerances of the caps and resistors I could buy at the time. :)
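The same rule of thumb in a few lines of Python, just to show the unit bookkeeping (the helper names are mine):

```python
import math

def f_corner_hz(r_ohms: float, c_uf: float) -> float:
    """RC corner frequency in Hz, with R in ohms and C in microfarads,
    using the 1/(2*pi) ~= 0.16 rule of thumb: f = 160000 / (R * C)."""
    return 160_000 / (r_ohms * c_uf)

def f_corner_khz(r_kohms: float, c_nf: float) -> float:
    """Same rule with R in kilohms and C in nanofarads: f = 160 / (R * C) kHz."""
    return 160 / (r_kohms * c_nf)

print(f_corner_hz(8, 1000))    # 20.0 Hz -- speaker + 1000 uF
print(f_corner_khz(10, 10))    # 1.6 kHz -- 10k with 10 nF across it
# The exact constant is 1/(2*pi) = 0.1592..., so the error is ~0.5%:
print(1 / (2 * math.pi * 8 * 1000e-6))  # ~19.89 Hz
```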

-Gnobuddy
 
In the era BC (Before Calculators), my little trick for R/C/L related frequency calculations was remembering that 1/(2 Pi) is almost exactly equal to 0.16. ... Accuracy? Well, the accuracy of this simple approximate formula is better than the tolerances of the caps and resistors I could buy at the time.
I still do it that way when casually looking at some new circuit, say one posted here.
It doesn't stop me from searching for higher precision later, of course.
 
When I was a kid doing this stuff, my R/L/C/frequency calculations usually started with a nomogram. In the LC case there are three scales: C, L, and frequency. Pick any two values, draw a line across the three scales, and the line gives you the third. If I start with 20 MHz, say, I can draw a line from there across the other two scales. The scales run in opposite directions, so I can angle one way toward larger caps and smaller inductors, or the other way toward smaller caps and larger inductors.

https://www.industrial-electronics.com/images/vom_jaski_1003.jpg
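The nomogram is solving f = 1/(2 Pi sqrt(LC)) graphically. A small Python equivalent that, like the nomogram, returns whichever quantity you leave out (the helper is mine, not from the thread):

```python
import math

def lc_resonance(f_hz=None, l_h=None, c_f=None):
    """Given any two of resonant frequency (Hz), inductance (H), and
    capacitance (F), return the third, from f = 1 / (2*pi*sqrt(L*C))."""
    if f_hz is None:
        return 1 / (2 * math.pi * math.sqrt(l_h * c_f))
    if c_f is None:
        return 1 / ((2 * math.pi * f_hz) ** 2 * l_h)
    return 1 / ((2 * math.pi * f_hz) ** 2 * c_f)  # solve for L

# Resonating a 1 uH coil at 20 MHz takes about 63 pF:
print(lc_resonance(f_hz=20e6, l_h=1e-6))  # ~6.3e-11 F
```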
 
Yes, I remember nomograms. Slide rules, early calculators, and punched cards are all things my late older brother worked with but that I'm too young for. I do remember very thick catalogues from some German company full of nomograms, though. I think they also appeared in small booklets from De Muiderkring, a company that published electronics magazines.