High End CPUs – Intel vs. AMD (cpubenchmark.net)
192 points by bhouston on Aug 11, 2017 | 95 comments


I don't mean to diminish the efforts of anyone involved, but I truly feel one man more or less moves the direction of the CPU industry: Jim Keller.

He, among others, invented x86-64 at AMD during its previous glory days, when AMD dominated the competition. He led Apple's chip design starting with the A4, and Apple's chips then and now dominate the mobile competition.

He came back to AMD and helped create Zen, with obvious results. Apparently he now works for Tesla.

In any event, this guy seems to have the Midas touch with CPUs; it's a shame there isn't more written about him or, more importantly, by him.


The timeline looks about right: SemiAccurate says he moved back to AMD in 2012[1], so figure four years to design a new core and get a few test runs in, then a year to ramp production enough that you can sell said chip.

That being said, I think it has more to do with an overall culture change; the technical talent to build a good core has been at AMD this whole time. Jim Keller said as much back in 2014[2], and there is nothing special about Zen's architecture from a technical standpoint. Zen is simply a well-executed conventional high-performance CPU core.

By not splitting up resources the way AMD's previous leadership did, they were able to build a solid, competitive core. This type of investment takes 5+ years to come to fruition, from concept to test runs of silicon.

1 - https://semiaccurate.com/2012/08/01/apples-cpu-architect-jim...

2 - https://www.youtube.com/watch?v=SOTFE7sJY-Q


Well... I don't know the entire history, only my memory of it, which admittedly is never an accurate source. At the time, it was stated that Jim Keller was brought back to AMD specifically to work on something which became codenamed 'Zen', since AMD was getting pounded with its previous ventures (Bulldozer et al.). I was excited then, then forgot, and then Ryzen was born.


Yeah, he definitely came in at an interesting time, right as Lisa Su came to the fore, locking down the PS4 and Xbox One. Following this, Lisa pared back the product lines so they weren't spreading their engineering talent thin, which AMD badly needed to do.

Remember that ARM Seattle core AMD announced in 2012[1], to be launched in 2014? Here we are in 2017, and it's finally just becoming available in significant quantities...

Diversions like that were what was keeping AMD from putting enough engineers on the task of building a solid CPU core.

1 - https://semiaccurate.com/2012/10/30/amd-announces-an-arm64-s...


Truly brilliant for sure. Reminds me of Anders Hejlsberg, who designed TypeScript, Delphi, and C#.

Or Linus Torvalds with Linux, and, just to prove he wasn't a one-hit wonder, Git.

I've concluded that what people fail to grasp about technology is that Einstein types still exist. There are rare engineers worth hundreds of times more than average, only this time they're well paid by their corporate masters. More like Picassos, really. Technology is a fusion of art and science; free from the normal restrictions of physics, it's an engineering discipline that embraces art and elegance to a degree not seen in harder, realer disciplines.


Git was helped by GitHub, and by being a requirement for Linux development.

Otherwise I might be enjoying Mercurial nowadays.


I AM enjoying Mercurial these days. It's a joy to work with. For most git repos I can just clone them using hg-git, work with them just like that, and push back to GitHub for a PR/CI.


Let's not get ahead of ourselves there on git. _Have you tried using submodules?_ :P


I actually like submodules; maybe they suffer from the awkward-at-first perception people have about git in general.


I don't think submodules were designed by Linus.


Throw in Jobs. Original Mac (1984), iPod (iconic music players), iPhone ('nuff said), iPad (launch of tablets). Musk, Edison, Bell. Really, a lot of the world-changing in tech is concentrated in bursts of productivity from a relative few individuals, like Einstein's golden year. Of course many make significant one-off contributions. But it is amazing how some just have this ability to strike gold consistently, making it look almost routine.


Jobs was a marketer of other people's ideas... I love how you leave out Woz, and the team of people that made the iPod/iPhone/iPad.

Edison is renowned for taking credit for other people's work, notably some of the work of Nikola Tesla, whom you also leave off your list and who was one of the rare singular geniuses on the level of an Einstein.

Elon Musk has a vision of his idealized world and the money to hire people who will attempt to make it happen.

None of the people you list (aside from Einstein) are singular geniuses working on their own, creating the ideas and products, and I think we do a disservice to everyone when we say "Jobs invented the iPhone".

That is not to say their contributions to the projects are not important, but I do believe there is some amount of hero worship and over-inflation of their value/importance, and a massive under-appreciation for the hundreds or thousands of other people who actually make these ideas or products a reality.


Funny, I was reading another HN thread where someone said:

"Pretty reliable rule of software is: if you think it's pointless you probably don't understand it well enough." https://news.ycombinator.com/item?id=14987634

I think that applies to your understanding of Steve Jobs. I don't think you understand the point of Steve Jobs well enough.

A product like the iPhone isn't special because of its ideas — science fiction authors had already envisioned products like this — it's special because it actually got made.

He had a vision for what the product should be. He was able to coalesce a set of human resources into making it a reality. He took raw ideas, found the best of them, insisted those best ideas still sucked and needed to be improved further. He made people, regardless of their particular discipline, do the best work they were capable of.

Calling him a marketer is a sad joke.


> Jobs was a Marketer of other peoples ideas

That's an oversimplification. He was a marketer, true, but he also was fanatically pushing people towards standards higher than anyone else in the industry.

If there was nothing special about Jobs, how can it be that Apple came up with the Mac (a somewhat just-right version of the Lisa ideas), with the iPod, the iPhone, and computers that actually look good (starting with the Bondi Blue iMac, which he had little to do with)? To say he could only sell other people's ideas ignores how many of these people voluntarily worked for someone with the reputation of being an insufferable asshole and pulled off miracle after miracle.


If I invent a fantastic new algorithm in a space people are hamfistedly competing in, does my manager get the credit for inventing the algorithm because he pushed me, or do I?

This type of thinking makes great engineers move into other non-engineering ventures.


If your manager drove you through several iterations, pointing out where your previous attempts were flawed and guiding you to the One True Implementation, then yes, your manager deserves some credit for making you implement their idea. The implementation may be yours, as well as the flawed ones, but it's the manager's idea in the end. You deserve credit for making it possible, but so does your manager.

A good manager isn't a person who tells you what to do. A good manager will help you do the best you can do. A really good one will make you do what you didn't think you could.


Let's pretend this is a journal article: if he pointed out flaws but didn't really do much else, does he end up in the list of authors at the beginning of the paper, or does he end up as a thank-you at the end?

It seems in this case people like Jobs end up as first author.


Why didn't the people who were actually responsible form their own company then, rather than wait for Steve to come along, assemble the team, and take all the credit? I also assume he made more money than them; how did so many people stand for that?


Everybody deserves credit. Your manager could have told you to work on a different problem. Are you really a better engineer than me, working for a different company in a different problem space where that algorithm isn't needed? Is it something special about you that nobody else could develop it, or is it just that the circumstances put you in a place where you couldn't help but develop it? So you developed a new algorithm; are you sure that if I had tried I wouldn't have developed a better one?


> Let's pretend this is a journal article: if he pointed out flaws but didn't really do much else, does he end up in the list of authors at the beginning of the paper, or does he end up as a thank-you at the end?

> It seems in this case people like Jobs end up as first author.

The rest is irrelevant. There might be several scientists far better than me; all that matters is that I got there first. My manager could also be irrelevant: I've had technically capable managers who didn't truly do anything except put me on a project (without knowing beforehand whether it was even possible) and tell me to do it, with next to zero feedback throughout the entire process.


I believe Jobs was an "editor" too. He would direct talent toward improvements and make the calls. He didn't just sell something already made; he was the guy who asked for it. There were many ways to design an MP3 player, some good ones that amazing people could have made, but only a few that would become iconic like the iPod.

PS: think about NeXT too; that's again a Jobs-only venture. He assembled the team, set the goals, drove the people. He's not "just a salesman"; he brings value to the formula. Not to say that people like Tesla weren't screwed, though.


I don't think "marketer" as a label should be diminished. Truly brilliant marketers are very rare and amazing to watch.


No one is trying to diminish them. The thing is, brilliant marketers already know how to get celebrated. But the scientists and engineers take extra effort to notice, because they aren't naturally "loud" about it.


Woz was never involved with a lot of the more interesting bits of Apple; he gets a lot of deserved credit for his early work, but also plenty of credit for things he never worked on.


Woz is the Mozart of 8-bit computer design. His design is surprising, even breathtaking at times, but the kind of magic he did then cannot be done in these days of billions of transistors per chip.


He sure seems to be very talented, but Ryzen also gained from a few important patents finally expiring (the one about SMT, for example).

https://scalibq.wordpress.com/2012/02/14/the-myth-of-cmt-clu...

http://www.anandtech.com/show/11170/the-amd-zen-and-ryzen-7-...


I remember how excited I was when I heard the announcement that Keller was to work on a new project at AMD a few years back.

And I can certainly relate to the latter point: As an electronic engineer that likes to dabble with IC design, I'd love to consume some of the wisdom that the man could impart.


Along with him was Dobberpuhl, who led the DEC Alpha team that Keller was on. Dobberpuhl founded SiByte, which is the company that Keller left AMD for.

They then went on together to work at P.A. Semi, which was acquired by Apple. Dobberpuhl retired there. Keller went to AMD, and now is at Tesla.


> one man more or less moves the direction of the CPU industry: Jim Keller.

I never heard of him, and now I'm glad I did. Thank you for posting that.

Here's a link to wikipedia's article on Jim Keller:

https://en.wikipedia.org/wiki/Jim_Keller_(engineer)


Wow! Hard to believe I would ever, ever, ever see AMD at the top of that list! Amazing! Forget about the price; it's just amazing to see AMD at the top. Now factor in the price, and wow, Intel is screwed. Not because of this one release, but because they are getting pounded from all angles.

Intel's strategy of limiting PCIe throughput to hold GPU manufacturers back is over, and these AMD CPUs are going to pair super well with GPU makers, mostly Nvidia but of course their own ATI, which is really going to make Intel look sad soon. Boatloads of major players have been irked by Intel holding back PCIe throughput; AMD let it rip, and "thread ripped" too!


> Intel's strategy of limiting the PCI throughput to hold GPU manufactures back is over, and these AMD cpu's are going to be paired super well with AMD, which is really making Intel look sad soon. Boat loads of major players have been irked by Intel holding back PCI throughput, AMD let it rip, and "thread ripped" too!

In what way does the increase in PCIe lanes stop holding GPU manufacturers back? To the best of my knowledge a single GPU doesn't use more than 16 lanes.

Only multi-GPU systems are held back by the PCIe lanes. So I can see Threadripper being great for supercomputers, but I don't see how it affects Nvidia's or AMD's GPU divisions.


The vast majority of Intel's lineup, at least for the last few years, offers only 16 lanes.


I get that, but GPUs currently can't use (the physical connector tops out at x16) or don't benefit from more than 16 lanes. Unless you are using multiple GPUs or another PCIe device, you would not benefit from more than 16 lanes. How does that hold GPU manufacturers back?


That sounds suspiciously like a chicken and egg problem. If the dominant CPU could only use 16 lanes, it makes sense to not put too much effort into supporting more if they can't be used effectively (other than being able to point at Intel as the problem and thus piss Intel off).


I don't think it is a chicken-and-egg problem. There are several CPUs that support more than 16 lanes, so a high-end GPU (paired with a high-end CPU) could use more lanes if it needed them.

Currently it seems 8 lanes of PCIe 3.0 is not quite enough, but 16 lanes is plenty. So we'll need a few years to get up to utilizing 16 lanes completely.

At which point 4.0 may be ready.
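
For a rough sense of scale, here's the back-of-the-envelope bandwidth math (a sketch using nominal per-lane rates after line encoding; real-world throughput is somewhat lower):

    # Nominal PCIe bandwidth per direction, after 128b/130b encoding
    # (approximate figures; protocol overhead reduces real throughput further).
    PER_LANE_GB_S = {"PCIe 3.0": 0.985, "PCIe 4.0": 1.969}

    for gen, per_lane in PER_LANE_GB_S.items():
        for lanes in (8, 16):
            print(f"{gen} x{lanes}: ~{per_lane * lanes:.1f} GB/s")
    # PCIe 3.0: x8 ~7.9 GB/s,  x16 ~15.8 GB/s
    # PCIe 4.0: x8 ~15.8 GB/s, x16 ~31.5 GB/s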


> Unless you are using multiple GPUs or another PCI device

You mean like having a GPU and one or more PCIe NVMe storage devices?


No, as in 4 GPUs and/or one or more PCIe NVMe storage devices, or PCIe-based comms cards such as InfiniBand cards.


This. I've been trying to put together a small home-lab Xeon E3 system that can theoretically support NVMe + GPU + InfiniBand. It's tough.

Typically you get something like 3 full-length PCIe slots on a Xeon E3 board (C236 chipset). BUT they can't all run at full x16 speed (16/16/16). You then have to choose between 16/8/8 and 16/16/0 due to the lack of PCIe lanes.


Can you explain why you need 16/16/16? Wikipedia tells me that one x16 PCIe 4.0 connection has a capacity of 32 GB/sec (x8 is 16 GB/sec). A quick search shows NVMe drives with sustained read or write speeds of ~2-4 GB/sec. Equally for the GPU: you need to move a lot of data in and out of the GPU to saturate an x8 connection.


It's rare to run that sustained; usually it's bursty, so the extra bandwidth is very much needed.


Some parallel computing problems are best solved by one CPU and a lot of GPUs. In some of these problems, inter-GPU communication is important. If you have each GPU using a single PCIe lane, then the compute power can be bound by inter-GPU communication bandwidth.


Yeah, there are definitely cases for Threadripper's PCIe lanes. But a single GPU is not one of those cases.


16 lanes is required unless you are ready to give up 20% gaming performance in the latest games. See the side-by-side comparison below.

https://www.youtube.com/watch?time_continue=2&v=8KOTiee5RqA

On top of that, many people want to stream video while they play their games at full speed, and then a couple of NVMe SSDs eat those lanes as well.


But if you're dropping 50% of the lanes and only getting a 20% speed drop, then it looks like you're not using those 16 lanes to capacity. Logically you'd expect closer to a 50% performance drop.


Each Intel upgrade gives you a 5-10% performance improvement. 20% is like three generations of Intel "innovation".


What I'm saying is: suppose the video card you were using had 16 lanes available and got X performance. Then you take away half the lanes available to the graphics card, so now you have 8 lanes, but performance only decreases by 20%. That means the game or application was only using a fraction of those extra lanes, so basically it wasn't fully exploiting the hardware available.
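
A toy model of that reasoning (purely illustrative; it assumes frame time splits into PCIe-transfer time plus everything else, and that halving the lanes doubles only the transfer part):

    # new_time / old_time = (1 - f) + 2f = 1 + f,
    # where f is the fraction of frame time spent on x16 transfers.
    def implied_transfer_fraction(observed_slowdown):
        # e.g. a 20% slowdown (1.20) implies f = 0.20
        return observed_slowdown - 1.0

    print(implied_transfer_fraction(1.20))  # 0.2 -> only ~20% of frame time was PCIe-bound at x16

So under that simplification, a 20% hit at half the lanes is consistent with the link being the bottleneck only a small fraction of the time.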


A second graphics card is not that unusual, and by the time you want an NVMe drive too, you're bumping up against the limits.

More is definitely better here.


Multi-GPU systems are why graphics cards are scarce.


How is Intel screwed by this? They just have to cut the excessive margins they've been enjoying due to a lack of competition for a while.

If they are screwed, it's by their own choice to abandon desktop and server CPUs to bet it all on mobile.


Well Intel isn't screwed at all. Shareholders are just gonna be angry when Intel is forced to lower margins and they see a drop in profit.

Also, while Intel may be winning the mobile (really only laptop) market now, ARM is slowly catching up. It's really only a matter of years before, say, someone like Apple chooses to ditch Intel in favor of using their own A-series processors in their computers.


How many GPUs get anywhere near saturating 16 lanes?


In CUDA applications, all the time.


1 user reported the score: https://www.cpubenchmark.net/cpu.php?cpu=AMD+Ryzen+Threadrip...

Edit: just noticed the title was changed. Originally it said something about Ryzen beating Intel's processors.



TechRadar's reporting... has had some issues. Initially the graphs in their Threadripper review were misleading (link to Imgur album [0]) and were later changed. AnandTech[1] (prior discussion[2]) and Gamers Nexus[3] are much more reputable and go further in depth.

[0] https://imgur.com/a/SuzY9

[1] http://www.anandtech.com/show/11697/the-amd-ryzen-threadripp...

[2] https://news.ycombinator.com/item?id=14979151

[3] http://www.gamersnexus.net/hwreviews/3015-amd-threadripper-1...

Edit: clarification


Yeah, I was gonna say: give it a week or a month to even out, so we can get more samples before we crown anyone king.


What makes PassMark a representative CPU benchmark? These one-company CPU benchmarks tend to be quite problematic (cf. GeekBench).

SPEC just came out with CPU2017. In SPEC there's at least a bunch of peer review, transparency and attention from academics.

Here are Anandtech's AMD vs Intel CPU2006 numbers: http://www.anandtech.com/show/11544/intel-skylake-ep-vs-amd-... http://www.anandtech.com/show/11544/intel-skylake-ep-vs-amd-...


> What makes PassMark a representative CPU benchmark?

Sadly nothing. PassMark is infamous for not being representative, and you can easily check that by looking at the position of the FX processors, which are rated way too high.

I run a PC hardware builder with a recommender function at its core, which means that the real core is the meta-benchmark powering the recommender. It works by taking professional reviews of real-life workloads, and I think it is a lot more realistic, though absolute ordering is of course not that easy. I recently made the benchmark results visible; there is one for games: https://www.pc-kombo.com/benchmark/games/cpu and one for applications: https://www.pc-kombo.com/benchmark/apps/cpu . Threadripper is in there and the big model leads the app benchmark, but I still need to add more benchmarks with the Threadripper processors to make the result more certain.

At the beginning I even started with the PassMark results, but had to move to better data when users complained the results were not good. They were right.
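
For the curious, here is a minimal sketch of one way such a meta-benchmark could normalize and combine per-review results (my own illustration, not pc-kombo's actual method; the CPU names and numbers are made-up placeholders):

    from collections import defaultdict
    from statistics import mean

    # Each review maps CPU -> result, where higher is better (e.g. fps, or 1/render-time).
    reviews = [
        {"cpu_a": 120.0, "cpu_b": 100.0},
        {"cpu_a": 95.0, "cpu_b": 90.0, "cpu_c": 70.0},
    ]

    relative = defaultdict(list)
    for review in reviews:
        best = max(review.values())              # normalize within each review
        for cpu, result in review.items():
            relative[cpu].append(result / best)

    scores = {cpu: mean(vals) for cpu, vals in relative.items()}
    print(sorted(scores.items(), key=lambda kv: kv[1], reverse=True))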


It's not authoritative. But in order to get a truly complete picture of CPU performance you've got to run all sorts of different workloads, and between the benchmarks and real-world performance you end up with kind of a muddle of:

"In highly threaded apps, AMD's high core CPU's stomp everything, especially for the price. Single core performance still belongs to Intel."

And then from there individual users need to figure out which CPUs are best for their use case. And a lot of users kind of want it summed up instead of sorting that out.

That aside, AMD has a really good product this generation compared to Bulldozer, and it does hark back to the previous decade, when AMD's CPUs were truly viable competition to Intel's.

I'm not sure how PassMark derives their scores exactly. But what I'm seeing across all sorts of reviews is that Threadripper is a formidable chip, and between an extremely high number of cores and a low price I can't help but believe that spot is earned, even if it is ultimately meaningless. It won't last forever either, but it will force Intel to respond. And that's what we've all wanted for the last 10 years.


It seems to have been representative enough during the time when Intel's CPUs were wiping the floor with the AMD CPUs on the very same benchmarks.


SPEC is very useful, but results are harder to come by (far fewer people have access to the suite), and there is a strong influence from the compiler (SPEC CPU2017 will tone this down, but early on there are almost no results out).

I believe AMD released AOCC partially to get a competitive position for SPEC. (Remember the controversial slides with the non-reportable GCC 6.3 results.)

Don't get me wrong, I would love CPU2017 results for TR vs 7820X etc.



So, based on that performance test, the AMD should have the same performance as my two E5-2660 v2s at half the price. That's pretty impressive.


The top CPU is almost 4 times faster than my CPU (i5 6600K). Is it just down to the GHz and number of cores?

Besides those two elements, what makes this processor so much faster?


Pipeline depth, cache sizes and types, number of execution units, memory channels and bandwidth, and on and on.

Absent are measurements for enterprise CPUs like the Xeon Platinum 8180M (street price about $13,000 USD) which trounces the fastest offering from AMD before even getting out of bed. [0]

Note 0: For multi-threaded uses. For single-threaded use-cases, the Xeon Gold 6144 has the potential to be faster:

- Gold 6144 vs. Platinum 8180M

- Cores: 8 vs. 28

- Turbo: 4.2 GHz vs. 3.8 GHz

- L2: 8x1 MiB vs. 28x1 MiB

- L3: 24.75 MiB vs. 38.5 MiB

It's really a shame that AMD isn't as comprehensively competitive as it could be in enterprise, because Intel could get somewhat lazier/pricier in some areas without effective competition.


- Are you really comparing a $1000 CPU to a $13,000 CPU?

- Threadripper is not Enterprise. Are you not aware of Epyc?


From what I can tell, he's disappointed that AMD doesn't offer a $13,000 CPU that competes with Intel.

Epyc falls short of the Xeon Platinum 8180 [0]

[0]: https://www.reddit.com/r/intel/comments/6ip3oi/xeon_platinum...


It's really neither of those things. Example: see MediaTek. They throw a billion cores and good CPU speed at the problem to no real result. Also see AMD of the last five-ish years.

IPC is where the speed comes from, in conjunction with cores/hyperthreading. I won't attempt to explain it as I'm not qualified, but it's interesting reading.


Well yeah, they're all part of the whole. Neither thing is worth a damn without the other. A 1 MHz single-core CPU with an IPC of 1000 isn't going to come close to a high-end CPU in 2017.

Having decent IPC coupled with a decent clock speed, a decent architecture, and a good number of cores will get you a solid CPU. Ideally you have great numbers in every category that contributes to performance. But if your numbers are crummy in any category (poor IPC, few cores, or a poor clock speed), that's going to have serious performance implications unless your competition is also suffering from some sort of deficiency that keeps you on par.
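
As a crude first-order model of that point (throughput roughly scales with cores x clock x IPC; the numbers below are made-up round figures, not measurements):

    # Crude first-order model; ignores memory, caches, turbo, and scaling losses.
    def relative_throughput(cores, clock_ghz, ipc):
        return cores * clock_ghz * ipc

    toy = relative_throughput(cores=1, clock_ghz=0.001, ipc=1000)   # the 1 MHz, 1000-IPC chip = 1.0
    modern = relative_throughput(cores=16, clock_ghz=3.4, ipc=4)    # a plausible 2017 high-end part
    print(modern / toy)  # ~218x: huge IPC can't rescue a hopeless clock and core count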

Ryzen addresses a lot of Bulldozer's weaknesses. But even Bulldozer was fairly competitive in highly threaded workloads compared to an i5. Having a truly competitive CPU with a high number of cores is just a different ballgame, and it's forced an Intel response. We've had about 7-8 years of the Core i series, and now, suddenly, when Ryzen appears we get an i9 with a serious number of cores. Because in multithreaded applications Intel's best 4- and 6-core i7s can't compete: their per-core performance isn't nearly superior enough to outperform 2-4x as many cores on a Ryzen CPU.


That, and they literally pack multiple dies into a single "package."

https://www.extremetech.com/computing/253416-amd-explains-th...


Those really are not why a 2017 CPU is faster than a 2012 CPU. It's much more about pipeline architecture than anything else.


It's a highly clocked 16-core (x2 = 32 SMT threads) part with good caches, decent IPC, and an OK cache-coherency fabric.


It would really depend on the work you're doing.

Programs that use a single core, or only a handful of threads, won't run four times faster on the 1950X or the i9 compared to your i5. Workloads that can use as many cores as you can throw at them will naturally benefit from the extra cores.

Your i5 has 4 cores; the 1950X has 16 cores and can execute 32 threads. Unless the architecture were truly inferior (which it's not), that's quite a disparity for your i5 to compete with in highly threaded workloads.

Ryzen's architecture is solid. It can reach competitive clock speeds. The number of cores just adds a lot more raw computing power. That's going to be far and away the biggest difference when looking at benchmarks that can use all the cores.
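
To put rough numbers on "won't run four times faster", here's an Amdahl's-law sketch (the parallel fractions are illustrative, not measurements of any particular workload):

    # Amdahl's law: speedup on n cores when a fraction p of the work parallelizes.
    def amdahl_speedup(p, n_cores):
        return 1.0 / ((1.0 - p) + p / n_cores)

    for p in (0.5, 0.9, 0.99):
        print(f"p={p}: 4 cores -> {amdahl_speedup(p, 4):.2f}x, 16 cores -> {amdahl_speedup(p, 16):.2f}x")
    # p=0.5:  4 cores -> 1.60x, 16 cores -> 1.88x
    # p=0.9:  4 cores -> 3.08x, 16 cores -> 6.40x
    # p=0.99: 4 cores -> 3.88x, 16 cores -> 13.91x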


L1/L2/L3 cache sizes, better instruction sets, improved instruction pipelines, more and larger registers... Some of the improvements come from having better software too (e.g. better/more modern compilers). EDIT: grammar


It does seem to reward number of cores. The old AMD 2012 FX CPUs with 6/8 cores that run slowly are listed as some of the best CPU/$ deals, despite their poor single-core performance making them useless for e.g. gaming.


I'd love to see how POWER8/9, SPARC64 XII and SPARC M7 stack up.

If someone could throw in some z14 PU benchmarks, I'd be more than happy. Are AMD's server-grade EPYC parts available?

(edit: I get it. These are mostly desktop processors with some low-end server parts thrown in. It's not a comprehensive high-end CPU benchmark, as it misses the whole E7 family)


I would like to point out that even if in some benchmarks AMD is not at the top, that doesn't mean it's not the best product to buy. There are other factors like heat output and power consumption, where Ryzen can be as much as 3 times lower than the equivalent Intel CPU.


One thing I think is that AMD has even more room to shine as software gets better at parallelization. I saw this trend back during my days managing 250 GB+ of data generation per day at a genetics company, and eventually got to build a 4-CPU, 64-core AMD Opteron system for physics computation. I am super excited about the new server line of CPUs, because the Opteron line wasn't perfect and I expect they learned a lot from it. Also, I only need two CPUs to get that 64-core count again! I dream about Supermicro or someone doing a 4-CPU board for the new line... 128 cores... (hey, I can dream!)


Why aren't Intel Xeon E7 processors considered in these benchmarks? Clearly they are "high-end" and carry a $5k+ price tag to show for it.


They are user-contributed.


Why is the "AMD Ryzen 7 PRO 1700X" testing significantly higher than the 1800X?

And why don't any of the AMD chips have clock speeds listed?


Amazed by how little difference there is between the following two Ryzens and their scores:

- Ryzen 5 1600x - Passmark: 13130; single: 1942

- Ryzen 7 1700 - Passmark: 13819; single: 1760

Why would you go for the octa-core, which is only marginally faster in multi-core workloads but considerably slower in single-threaded workloads? On top of that, it's 25% more expensive.
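
Spelling out the gap from the quoted PassMark numbers (just arithmetic on the scores above):

    r5_1600x = {"multi": 13130, "single": 1942}
    r7_1700 = {"multi": 13819, "single": 1760}

    print(f"multi:  1700 is {100 * (r7_1700['multi'] / r5_1600x['multi'] - 1):.1f}% faster")    # ~5.2%
    print(f"single: 1600X is {100 * (r5_1600x['single'] / r7_1700['single'] - 1):.1f}% faster")  # ~10.3%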


Single core performance on the i9 is still going to be significantly higher though, so it depends on your use case. Testing I've seen so far has had the i9 getting about 20% higher framerates in games, for example.


There are also more i9 chips coming out this month and next. The $1200 12-core 7920X will probably have roughly the same multicore performance as the TR 1950X, while maintaining significantly better single-core performance. So, in my opinion, if you're looking for an affordable badass workstation chip but you also want top-of-the-line gaming performance, wait until the end of August and get an i9-7920X.


What this review does not mention in its price tag is the cost of the cooling system. AMD claims liquid cooling is required for Threadripper.


Wonder what Epyc will look like.


That explains the name. They ripped Intel a new one.


In case someone from AMD is reading this: guys, you need to fix the stability issues on Linux, or your Epyc is DOA. People are having some serious lock-up trouble with Ryzen, even with the latest AGESA updates and kernels. This is the only issue preventing me from recommending the purchase of Threadripper workstations at work. We do need that PCIe bandwidth for GPUs, but we absolutely can't tolerate instability.


Epyc has a different stepping than Ryzen, and as far as I know nobody could reproduce the "kill-ryzen" SEGFAULT on Epyc or Threadripper.


I sure hope so, but given the epic fail of Ryzen, I'm going to wait for others to beta test it for me, and/or for some sort of explanation and fix from AMD.


Well then you likely won't get the information you need as quickly as you need it.

Buy one, test it in your environment, and file bug reports if you come across any bugs. You don't need to replace your entire fleet in one go; plan a phased rollout, starting with one workstation running the most aggressive of your workloads.


https://www.phoronix.com/scan.php?page=news_item&px=Ryzen-Se...

"AMD was also able to confirm this issue is not present with AMD Epyc or AMD ThreadRipper processors".


I had a lot of issues: obviously Ubuntu 17.04 doesn't boot, but Fedora 26 crashed a lot, seemingly no matter what I changed. Windows 10 also crashed fairly often. I was about to say that it wasn't as bad as F26, but given the automatic restart on crash that I kept seeing, it might have been just as bad.

Currently using Arch/4.12-zen/nvidia with a Gigabyte Gaming K5/Ryzen 7 1700X/GTX 1060, and since kicking nouveau out I have had no crashes at all. I am hopeful that this means my Ryzen woes are over.


AMD has confirmed that they've been able to replicate the Linux bug and that it only happens on Ryzen; ThreadRipper and Epyc do not suffer from that issue.


They have also inadvertently confirmed that their Linux testing efforts are lacking, which just blows my mind TBH, because they're also in the server market, which is dominated completely by Linux.




