Hardware for Deep Learning

Last week NVIDIA launched the Titan X (12 billion transistors, 3,584 CUDA cores at 1.53 GHz, and 12 GB of GDDR5X memory at 480 GB/s), which delivers around 11 TFLOPS of processing power on a single chip, at a cost of roughly $1,200. What I find interesting is the raw processing power: I had read about a company doing drug-discovery research with deep learning, and their machine with multiple NVIDIA cards reached about 20 TFLOPS combined.
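The ~11 TFLOPS figure follows directly from the specs above. A quick sanity check, assuming the usual convention that each CUDA core retires one fused multiply-add (FMA) per clock, counted as 2 floating-point operations:

```python
# Back-of-the-envelope peak FP32 throughput for the Titan X (Pascal),
# using the specs quoted above.
cuda_cores = 3584
boost_clock_ghz = 1.53
flops_per_core_per_clock = 2  # one FMA = a multiply plus an add

peak_tflops = cuda_cores * boost_clock_ghz * flops_per_core_per_clock / 1000
print(f"Peak FP32: {peak_tflops:.2f} TFLOPS")  # ~10.97, i.e. the quoted ~11 TFLOPS
```

Note this is a theoretical peak; real deep-learning workloads typically sustain only a fraction of it.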

Also, if you use a cloud GPU computing service, you can choose between NVIDIA cards with Grid technology or Tesla technology, the latter being more powerful but also more expensive.


In case you want to see the Titan X in action:


Someone hasn't heard of the Xeon Phi and the fact that its next generation will kinda kick Tesla GPUs around the clock. Let's also not forget that GPUs are somewhat limited, whilst the Xeon Phi is just a boat-load of x86 cores that can run any kind of logic.

And while we're on the topic of stuff kicking other stuff: IBM's POWER8 killing a Xeon:


Eight hardware threads per core and a maximum of 10 instructions per clock - yummie.

Incidentally, nVidia's stock was rated Underperform for exactly the reasons above.

Disclaimer: I own a 980 GTX.

Oh, and the new Titan X is an overpriced 1080 Ti. Where the heck is our HBM2 card?


@dakull — the article also compares the Xeon Phi with NVIDIA cards for deep learning, explains why the NVIDIA cards are better than the Xeon Phi, and says the GTX 1060 delivers performance five times better than Amazon's GPU cloud, making it the best option in terms of price/performance.

Which GPU(s) to Get for Deep Learning: My Experience and Advice for Using GPUs in Deep Learning - and compared to the cloud

The GTX 1060 is the best entry GPU for when you want to try deep learning for the first time, or if you want to occasionally use it for Kaggle competitions. Its 6GB memory can be quite limiting, but for many applications it is sufficient. The GTX 1060 is slower than a regular Titan X, but it is comparable in both performance and eBay price to the GTX 980. Overall this is clearly the best bang for the buck you can get. A very solid choice.

However, these new Pascal cards are not ideal for deep learning with their 6GB or 8GB memory. That memory is quite sufficient for most tasks, for example for Kaggle competitions, most image datasets, deep style and natural language understanding tasks. But for researchers, especially those who work on ImageNet, video data or natural language understanding tasks with huge context data, the regular GTX Titan X or GTX Titan X Pascal with 12GB of memory will be better – memory is just too important here. Researchers should definitely go with the new GTX Titan X Pascal due to its better speed. However, the cost difference is quite significant, and if you need the memory a GTX Titan X from eBay is a very solid choice. If you already own regular GTX Titan Xs, then you should think again about whether the additional investment and the hassle of selling the old cards and buying the new ones is really worth it. I will probably keep my regular GTX Titan Xs.

The options are now more limited for people that have very little money for a GPU. GPU instances on Amazon Web Services are quite expensive and slow now and no longer pose a good option if you have less money. I do not recommend a GTX 970, as it is slow, still rather expensive even if bought in used condition, and there are memory problems associated with the card to boot. Instead, try to get the additional money to buy a GTX 1060, which is faster, has a larger memory and has no memory problems.
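To make the 6 GB vs. 12 GB point concrete, here is a rough back-of-the-envelope sketch of training memory usage (my own rule of thumb, not from the article; the VGG-16-sized parameter count and per-image activation size are ballpark assumptions):

```python
# Rough rule of thumb (an assumption, not from the article): training memory
# ~= weights + gradients + optimizer state + activations for the whole batch.
def training_mem_gb(params_millions, activation_mb_per_sample, batch_size,
                    bytes_per_value=4, optimizer_copies=2):
    # weights + gradients + e.g. Adam's two moment buffers, all in FP32
    weights = params_millions * 1e6 * bytes_per_value * (2 + optimizer_copies)
    # forward activations kept around for backprop, per sample in the batch
    activations = activation_mb_per_sample * 1e6 * batch_size
    return (weights + activations) / 1e9

# A VGG-16-sized network (~138M params) with ~60 MB of activations per image:
print(round(training_mem_gb(138, 60, 64), 1))  # ~6.0 GB at batch size 64
```

At batch size 64 this already saturates a GTX 1060's 6 GB, while a 12 GB Titan X still has headroom — which is the article's point about researchers needing the larger card.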


this pic says it all:

But yes, for now CUDA looks like the best-performing proprietary tech out there.

The Xeon Phi, on the other hand, is bootable :slight_smile: and, well, x86. Hopefully it will shake things up. And nVidia fans will win from this too: lower prices for Titans.
