Cleve’s Corner: Cleve Moler on Mathematics and Computing

The Ardent Titan, part 1

Posted by Cleve Moler

Even though I am one of the founders of the MathWorks, I only acted as an advisor to the company for its first five years. During that time, from 1985 to 1989, I was trying my luck with two Silicon Valley computer startup companies. Both enterprises failed as businesses, but the experience taught me a great deal about the computer industry, and influenced how I viewed the eventual development of MATLAB. The second startup was known as Ardent Computer for most of its brief existence. We built a machine known as the Titan.

The Actual Silicon Valley Beckons

I was working for Intel Scientific Computers in Beaverton, Oregon, in the spring of 1987 when I got a surprise call from Gordon Bell. I did not know Bell personally, but I did know that he had become famous as one of the architects of Digital Equipment Corporation's computers, including the enormously successful VAX. When he called me, he was VP of Engineering of what was temporarily called Dana Computer, a new startup in Mountain View, California, in the real Silicon Valley. Dana would later change its name to Ardent.

Ardent's Plans

The founders of Dana had impressive entrepreneurial and technical credentials. Allen Michels, the CEO, Ben Wegbreit, and Steve Blank had been among the principals in Convergent Technologies, a successful venture in the early days of microcomputers. They had sold Convergent to Unisys and were eager to create another startup.

Their goal was to make a new class of computer -- a personal graphics supercomputer -- a machine that would provide a substantial fraction of supercomputer performance, together with integrated high-quality graphics, at a price accessible to a single user.

They were assembling a stellar list of technologists. In addition to Gordon Bell, there were established senior people from other computer companies and junior people who had not yet become so well known. Jonathan Rubinstein, Glen Miranker, Bill Worley, Steve Johnson, Randy Allen, John Sanguinetti, Greg Walsh. If the new company had been in a university, it would have been one of the top computer engineering departments in the country.

Count Me In

In 1987, even more than today, success in scientific computing was measured by the Linpack benchmark. Dana's stated goal was to build a computer that could achieve six megaflops with compiled code on a 100-by-100 matrix. They wanted to hire Jack Dongarra, who was Mr. Linpack, but he said no and suggested me instead. That prompted Gordon Bell's phone call.

The host on my visit to Mountain View was Steve Johnson, who was heading Dana's compiler development. Steve had come from Bell Labs, where he had been one of the key developers of the original Unix. He had written the Portable C Compiler, yacc, spell, and lint. He would become my good friend and years later would join MathWorks, where, among other things, he would write M-Lint, which has become the MATLAB Code Analyzer.

I was hired to head a group that would write math libraries, develop demos, run customer benchmarks, and the like. My wife, young daughter, and I moved down from Oregon. I hired six people to join my group.

Black Velvet Poster

Steve Blank was, and is, the ultimate marketeer. Today he teaches entrepreneurship and marketing at Stanford and Berkeley. See SteveBlank.com. One of his first projects at Ardent was to create this poster. We called it the Black Velvet poster because it resembles the artwork popular back then that featured Elvis Presley, guitars, and Bengal tigers on black velvet backgrounds. It is a log-log plot of performance on the Linpack Benchmark versus price. Steve has a higher resolution image on his web page.

Workstations are at the lower left in the violet box, minicomputers are in the center in the green box, and supercomputers are at the upper right in the blue box. All of the computers shown on this chart -- except the Titans -- cost roughly $100,000 per Linpack megaflop. The new kids on the block are the Titans, shown in the gold box. With our single processor machine, which is the first gold ball, we were planning to offer six megaflops for under $100K. That's an order of magnitude improvement in price/performance over all the other machines.

The Titans were multiprocessors. You could have two or four of them. These are the other two gold balls, where Steve's graph is predicting something over ten megaflops. We never achieved that, and I seriously doubt that we could have on a 100-by-100 matrix. Eventually the rules for the Linpack Benchmark were relaxed to allow the larger matrices that are necessary to achieve good utilization on multiprocessors.

The Machine

There is a good description of the Titan on a web page of the Rhode Island Computer Museum, Ardent Titan Graphics Supercomputer. There was also a comprehensive technical paper by five of the principal Titan engineers in the journal Computer in 1988. The link is below in Further Reading.

The CPU of the Titan was a MIPS R2000 microprocessor. MIPS was then a new computer company that had been founded by Stanford EE and CS Professor John Hennessy and others. Today, Hennessy is Stanford's President. The MIPS chip was one of the first commercial implementations of RISC, reduced instruction set computing.

The heart of Titan's scientific computing capabilities was the custom vector unit. It handled all of the floating point arithmetic, including the scalar operations. There were three independent arithmetic functional units: ALU, multiplier, and divider, with an aggregate peak computation rate of 16 megaflops. The vector register was large and flexible. It held 8192 double precision numbers and could be regarded as anything from one vector of length 8192 to 32 vectors of length 256.
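The reconfigurability described above can be sketched in a few lines of code (Python here, purely for illustration, since the post itself contains no code). Note that only the two endpoint shapes -- one vector of 8192 elements, or 32 vectors of 256 -- come from the text; the intermediate power-of-two configurations are an assumption for illustration, not documented hardware behavior.

```python
# The Titan's vector register file held 8192 double-precision words.
# The post names two endpoint configurations: 1 vector of length 8192,
# or 32 vectors of length 256. The power-of-two steps in between are
# assumed here for illustration only.
TOTAL_WORDS = 8192

def register_partitions(max_vectors=32):
    """List (vector_count, vector_length) pairs that tile the register file."""
    configs = []
    count = 1
    while count <= max_vectors:
        configs.append((count, TOTAL_WORDS // count))
        count *= 2
    return configs

for count, length in register_partitions():
    print(f"{count:2d} vectors x {length} elements")
```

Every configuration tiles the same 8192-word register file; the hardware view changes, the storage does not.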

DRAMs (dynamic random access memory chips) were just becoming available in quantity at acceptable prices. We could populate a huge circuit board with enough 4-megabit DRAMs to provide 128 megabytes. The board, which was as big as the top of a coffee table, weighed perhaps 20 pounds and cost several tens of thousands of dollars. You could have up to four boards in a system, for a total of half a gigabyte of memory. That was considered a lot of memory at the time. By way of comparison, today MathWorks gives away USB memory sticks with a full gigabyte of storage as freebies at our trade show booth.

Kubota

Kubota Corporation is a large industrial corporation, established in 1890, based in Osaka, Japan. They are best known in the USA for their tractors, but they manufacture lots of other stuff, including irrigation pipe and air conditioning equipment. They are listed among the top 100 companies in Japan by the Tokyo Stock Exchange.

In 1987, Kubota was looking to get involved in high tech and Silicon Valley. So they provided the second round of venture capital funding for Ardent and invested in MIPS. Kubota also built, or at least started to build, a highly automated plant in Japan to manufacture the Ardent Titan. I don't know if the plant was ever finished. We certainly never sold enough machines to cause the plant to go into operation.

Six Megaflops

In the 1980s there was a strong correlation between performance on the Linpack benchmark and performance on many larger, commercially important technical applications. Dongarra's May 1986 report listed 16 computers capable of six or more megaflops on the benchmark. All of them were mainframes or supercomputers, and all but two were vector machines. So aiming to get six megaflops out of a single user machine was a compelling goal.

When the first working vector chips came back from the foundry late one evening just before Christmas in 1987, those of us still in the office gathered around a machine to see how fast it actually was. I was responsible for the timing. Solving a 100-by-100 system on a six megaflop machine would take about a tenth of a second, so to get reliable timings I had to run it several times. I measured 6.3 megaflops! We had made it. We were on our way. The CFO passed out crisp $100 bills to everyone in the building.

Stellar

But lurking 2,686 miles to the east of Mountain View, California, in Waltham, Massachusetts, was Stellar Computer Corporation, founded by William Poduska. Poduska was a darling of the "Massachusetts Miracle", a linchpin of Governor Michael Dukakis' unsuccessful campaign for president in 1988. Poduska had previously founded Prime Computer and Apollo Computer, which were both tremendous successes. (MathWorks recently acquired a landmark building in Natick that had previously been occupied by Prime Computer, as well as Carling Black Label and Boston Scientific. The street behind the building is named Prime Parkway.)

Poduska apparently had the same idea as Al Michels. We weren't sure of the details, but we knew that Stellar was developing a machine that would compete head on with ours, and that they would probably announce it at about the same time.

One Sunday before the machines were announced, the Boston Globe ran a long feature piece about the two companies. It was an East Coast versus West Coast thing. They pointed out Poduska's earlier successful startups, his support of the Boston Ballet and Boston Lyric Opera, and his membership on their boards. They called him a White Knight. They didn't know as much about Michels. They said that Ardent Computer was a "Silicon Valley boot camp".

The battle was underway.

Graphics, Dore and MATLAB

The Titan was more than just a fast vector computer. It also had powerful, integrated graphics hardware, supported by sophisticated graphics software known as Dore, for Dynamic Object Rendering Environment. Running on top of all of this was MATLAB, which I will tell you about in part two.

Further Reading

Tom Diede, Carl F. Hagenmaier, Glen S. Miranker, Jonathan J. Rubinstein, and William S. Worley, Jr., "The Titan Graphics Supercomputer Architecture," Computer, IEEE Computer Society, vol. 21, no. 9, September 1988, pp. 13-28, <http://doi.ieeecomputersociety.org/10.1109/2.14344>



Published with MATLAB® R2013b

Comments

Zoom in on Blank’s hi-res image. The outlier green ball is the IBM 3090, IBM’s largest mainframe at the time of these measurements. Expensive, but relatively slow on Linpack. It didn’t have a vector unit. So IBM introduced the 3094, which did.

The blue ball just above the green outlier says “IBM 3090 w/VPF”. That puts the 3090 into the supercomputer category. “VPF” means either “vector facility”, which was hardware, or “vector Fortran”, which was software, or both. I don’t remember which.

These postings are the author's and don't necessarily represent the opinions of MathWorks.