The 13.5 Million Core Computer – Hackaday
Having a dual- or quad-core CPU is not very exotic these days and CPUs with 12 or even 16 cores aren’t that rare. The Andromeda from Cerebras is a supercomputer with 13.5 million cores. The company claims it is one of the largest AI supercomputers ever built (but not the largest) and can perform 120 Petaflops of “dense compute.”
We aren’t sure about the methodology, but they also claim more than one exaflop of “AI computing.” The computer has a fabric backplane that can handle 96.8 terabits per second between nodes. According to a post on Extreme Tech, the core technology is a 3-plane wafer processor, WSE-2. One plane is for communications, one holds 40 GB of static RAM, and the math plane has 850,000 independent cores and 3.4 million floating point units.
The data is sent to the cores and collected by a bank of 64-core AMD EPYC 3 processors. Andromeda is optimized to handle sparse matrix computations. The company claims that the performance scales “almost linearly.” That is, as you double the number of cores used, you roughly halve the total run time.
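That “almost” is the interesting part: by Amdahl’s law, any serial fraction of a workload caps how far doubling cores can take you. A minimal Python sketch (the 0.99 parallel fraction is an illustrative assumption, not a Cerebras figure):

```python
# Amdahl's law: ideal speedup on n cores when a fraction p of the
# runtime is parallelizable. Scaling looks "almost linear" only
# while n is small relative to 1/(1-p).
def speedup(n, p):
    """Speedup on n cores with parallel fraction p (0 <= p <= 1)."""
    return 1.0 / ((1.0 - p) + p / n)

for n in (1, 2, 4, 8, 16):
    print(n, round(speedup(n, 0.99), 2))
```

With p = 0.99, going from 1 to 2 cores gives about 1.98x (near-perfect), but by 16 cores the serial 1% already drags the speedup well below 16x.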
The machine is available for remote use and cost about $35 million to build. Since it draws 500 kW at peak load, it isn’t free to operate, either. Extreme Tech notes that the Frontier computer at Oak Ridge National Laboratory is both larger and more precise, but it cost $600 million, so you’d expect it to be more capable.
Most homebrew “supercomputers” we see are more for learning how to work with clusters than trying to hit this sort of performance. Of course, if you have a modern graphics card, OpenCL and CUDA will let you do some of this, too, but at a much lesser scale.
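For a taste of the scatter/compute/gather pattern those homebrew clusters (and GPUs) exercise, here is a toy Python sketch. It’s illustrative only: threads stand in for cluster nodes, and `parallel_dot` is a made-up helper, not a real cluster or CUDA API.

```python
# Toy cluster pattern: scatter a dot product across workers, then
# gather and reduce the partial results. Threads stand in for nodes;
# a real cluster would use MPI, and a GPU would use CUDA/OpenCL kernels.
from concurrent.futures import ThreadPoolExecutor

def partial_dot(chunk):
    """Compute the dot product of one (xs, ys) chunk."""
    xs, ys = chunk
    return sum(x * y for x, y in zip(xs, ys))

def parallel_dot(xs, ys, workers=4):
    step = max(1, len(xs) // workers)                      # scatter
    chunks = [(xs[i:i + step], ys[i:i + step])
              for i in range(0, len(xs), step)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_dot, chunks))          # gather

print(parallel_dot(list(range(1000)), list(range(1000))))  # 332833500
```

The answer matches the serial `sum(x * x for x in range(1000))`; only the work distribution changes, which is the whole point of the pattern.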
“40 GB of static RAM”
My brain melted just thinking about this much SRAM.
This oldie can remember a time when the Cray-1 seemed this exotic, this expensive, this power-hungry.
Now I carry a device around in my pocket that is orders of magnitude faster, has orders of magnitude more memory and runs off a battery.
I wonder if I will ever carry around something more powerful than Andromeda in my pocket?
Not on silicon, if I had to guess
What’s logically next? Literally diamond-style carbon? A “living” solid-state build? Some more exotic combination rather than primarily single elements? There are already structural limits on how atomically close CPU and GPU features can get, and we could go more 3D with them, but only to a point, and that has its own limitations.
The next question is what it’s going to be able to do, specifically. Most supercomputers are extremely good at very specific things, most of which we wouldn’t need for daily or simple tasks. Heck, a basic ARM chip can now do a pretty good job of things like turning on a light bulb.
Probably not in your pocket, but you do carry a more complex computer in your skull. 😉
Have to reboot every night.
Is it a full reboot though? I’d say it’s more like hibernation mode, or maybe sleep mode. 😴😁
Nothing in my skull can compute a square root to 10,000 places in a millisecond.
But it can solve a CAPTCHA.
I have 16GB in my laptop 😀
If they are in your laptop, then that isn’t static RAM because you can move it around 😛
The RAM in your laptop is SRAM not DRAM. Big difference.
Other way around, no?
Not ‘static’ RAM though 🙂 . I too have 16GB ‘dynamic’ DDR4 RAM in my laptop…. And 64GB in my home workstation…. 16GB seems to be the sweet spot now for most machines whether Windoze or Linux. I remember the time I ‘upgraded’ from 64K to 256K of RAM in my DEC Rainbow… I also remember when a 10MB HDD was a ‘want’ but too expensive… so stuck with floppies…. Those were the days.
Skipping 3 steps…
Luxury!
We had to thread our magnetic core by hand. After drawing the fine wire in the Hull makerspace forge from discarded Lucas electric parts found on the side of the road.
Those were the days! Indeed…
Yeah, but you can only access it 64 bits at a time.
Does it run Crysis?
Old-school question: does it run “Frogger”?
Does it run DOOM?
There you go, that’s the new metric! Even notepad runs doom!
Whatever you do, do not let Captain Kirk talk to it. It will melt down!
The question is have they called the computer Andromeda because they received a radio signal detailing the construction plans from the Andromeda galaxy?
If it can grow Julie Christie in a tank I’m all in favour…
I’d call “having a dual core” pretty exotic these days, when most cheap laptops come with 4+ cores. (Steam has 2 cores at 4 cores is more a marketing gimmick…)
Dual core is not really that exotic, it isn’t used much now but it isn’t exotic. What are you on about steam anyway? Are you on about the steam deck? If so that has 4 cores and 8 threads.
If you mean that you don’t need 4 cores and only need 2 and it is just a marketing gimmick then you are just wrong. There is a huge difference in performance with extra cores and lots of applications and games now are able to make use of multiple cores.
But will it run Do..
No wait, will it run an AI that can conceptualize and code Doom from scratch based on an oil painting of John Romero? 😀
It runs bioinformatic genetic engineering simulations to help create demonic abominations to set free in the real-life occult space marine setting of your choosing.
An AI computer? Can it do anything useful?
“Here I am, brain the size of a planet, and they ask me to take you to the bridge. Call that job satisfaction? ’cause I don’t.”
“It’s the people you meet in this job that really get you down.”
The A in AI stands for Abysmal
I guess it depends on how you define a core. e.g. if you count GPU cores as cores, then the IBM Summit has 23.6 million GPU cores (plus a paltry handful of 200k POWER9 CPU cores). If you want to complain that a GPU core is not a ‘real’ general purpose CPU core, then neither are the ‘cores’ in Cerebras’ WSE wafers.
Certain fruit company should have done this, then we could have joked about “apple cores” all day long.
Which GPU should you pair with this to get max framerates in Cyberpunk?
I went into software because I was on a team that built a supercomputer, and while it was big and sexy, and fun to get pictures next to the racks of servers doing a burn-in, it struck me that most compute power in the world sits idle because it lacks software to really push it. I now ruthlessly push big iron to automate things that waste humanity’s time. While at the same time lamenting that the bigger and more connected I make it, the more time I have to waste with layers of obfuscation, virtualization, and containerization so a bug in a logger (actually many bugs at every layer) doesn’t make me violate the trust my customers put in us.
supercomputers are inspiring in their potential, but I usually find the work assigned to them tends to lack any luster at all.
With such a computer, you can’t be afraid of a “core dump” message … still 13,499,999 cores remaining … fine … ^^
Makes sense, dense compute, the new frontier for artificial stupidity.
I misunderstood the article’s title, and now I’m wondering what the largest capacity magnetic core memory was.
Here’s a link which looks interesting.
https://ethw.org/Magnetic-Core_Memory
I couldn’t find a definitive answer. The Wikipedia entry for magnetic-core memory mentions nothing larger than 256K 36-bit words — about 9 megabits in a huge cabinet.
If you consider ferroelectric RAM chips to be magnetic-core memory, then 16 megabit chips are available from Infineon.