Cognitive computer explained

A cognitive computer is a computer that hardwires artificial intelligence and machine learning algorithms into an integrated circuit that closely reproduces the behavior of the human brain.[1] It generally adopts a neuromorphic engineering approach. Synonyms include neuromorphic chip and cognitive chip.[2] [3]

In 2023, IBM's proof-of-concept NorthPole chip (optimized for 2-, 4- and 8-bit precision) achieved remarkable performance in image recognition.

IBM developed Watson, a cognitive computer that uses neural networks and deep learning techniques.[4] In 2014, IBM followed with the TrueNorth microchip architecture,[5] which is designed to be closer in structure to the human brain than the von Neumann architecture used in conventional computers.[1] In 2017, Intel announced its own version of a cognitive chip, Loihi, which it intended to make available to university and research labs in 2018. Intel (most notably with its Pohoiki Beach and Pohoiki Springs systems[6] [7]), Qualcomm, and others are steadily improving neuromorphic processors.

IBM TrueNorth chip

TrueNorth was a neuromorphic CMOS integrated circuit produced by IBM in 2014.[8] It is a many-core processor with a network-on-chip design, comprising 4,096 cores, each with 256 programmable simulated neurons, for a total of just over a million neurons. Each neuron in turn has 256 programmable "synapses" that convey the signals between them. Hence, the total number of programmable synapses is just over 268 million (2^28). Its transistor count is 5.4 billion.
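
The headline neuron and synapse counts follow directly from the per-core figures; a quick arithmetic check (assuming exactly 4,096 cores, 256 neurons per core, and 256 synapses per neuron, as stated above):

```python
# Back-of-the-envelope check of TrueNorth's published figures.
cores = 4096                      # neurosynaptic cores on the chip
neurons_per_core = 256            # programmable simulated neurons per core
synapses_per_neuron = 256         # programmable synapses per neuron

neurons = cores * neurons_per_core            # 1,048,576 (just over a million neurons)
synapses = neurons * synapses_per_neuron      # 268,435,456 = 2**28 synapses

print(f"{neurons:,} neurons, {synapses:,} synapses (2**28 = {2**28:,})")
```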

Details

Because memory, computation, and communication are handled locally in each of the 4,096 neurosynaptic cores, TrueNorth circumvents the von Neumann bottleneck and is very energy-efficient, with IBM claiming a power consumption of 70 milliwatts and a power density of 1/10,000th that of conventional microprocessors.[9] The SyNAPSE chip operates at lower temperatures and power because it draws only the power necessary for computation.[10] Skyrmions have been proposed as models of the synapse on a chip.[11] [12]

The neurons are emulated using a Linear-Leak Integrate-and-Fire (LLIF) model, a simplification of the leaky integrate-and-fire model.[13]
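
A minimal sketch of such a neuron, with illustrative parameters rather than TrueNorth's actual values: the membrane potential integrates weighted input spikes, loses a fixed amount each tick (the linear leak), and fires and resets when it crosses a threshold.

```python
class LinearLeakNeuron:
    """Toy linear-leak integrate-and-fire neuron (illustrative parameters only)."""

    def __init__(self, leak=1.0, threshold=10.0, reset=0.0):
        self.leak = leak            # constant amount subtracted each tick (linear leak)
        self.threshold = threshold  # firing threshold
        self.reset = reset          # potential after a spike
        self.v = 0.0                # membrane potential

    def step(self, weighted_input):
        """Advance one discrete tick; return True if the neuron spikes."""
        self.v += weighted_input                  # integrate synaptic input
        self.v = max(self.v - self.leak, 0.0)     # fixed (linear) leak, not exponential decay
        if self.v >= self.threshold:
            self.v = self.reset                   # fire and reset
            return True
        return False

# Example: a steady input of 3 units per tick fires roughly every 5 ticks.
neuron = LinearLeakNeuron()
spikes = [neuron.step(3.0) for _ in range(20)]
print(sum(spikes), "spikes in 20 ticks")
```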

According to IBM, it does not have a clock,[14] operates on unary numbers, and computes by counting up to a maximum of 19 bits.[5] [15] The cores are event-driven, using both synchronous and asynchronous logic, and are interconnected through an asynchronous packet-switched mesh network-on-chip (NoC).[15]

IBM developed a new software ecosystem to program and use TrueNorth. It included a simulator, a new programming language, an integrated programming environment, and libraries.[14] This lack of backward compatibility with any previous technology (e.g., C++ compilers) poses serious vendor lock-in risks and other adverse consequences that may prevent its commercialization in the future.[14]

Research

In 2018, a cluster of TrueNorth chips network-linked to a master computer was used in stereo vision research that attempted to extract the depth of rapidly moving objects in a scene.[16]

IBM NorthPole chip

In 2023, IBM released its NorthPole chip, a proof-of-concept for dramatically improving performance by intertwining compute with memory on-chip, thus eliminating the von Neumann bottleneck. It blends approaches from IBM's 2014 TrueNorth system with modern hardware designs to achieve speeds about 4,000 times faster than TrueNorth. It can run ResNet-50 or YOLOv4 image-recognition tasks about 22 times faster, with 25 times less energy and in 5 times less space, than GPUs fabricated on the same 12 nm process node. It includes 224 MB of RAM and 256 processor cores, and can perform 2,048 operations per core per cycle at 8-bit precision and 8,192 operations per core per cycle at 2-bit precision. It runs at between 25 and 425 MHz.[17] [18] [19] [20] NorthPole is an inference chip, but it cannot yet handle models as large as GPT-4.
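
Those per-core figures imply a rough upper bound on raw throughput; a back-of-the-envelope estimate assuming every core is busy each cycle at the top 425 MHz clock (an idealization, not a published benchmark):

```python
# Rough peak-throughput estimate for NorthPole from the figures above.
cores = 256
clock_hz = 425e6                 # top of the stated 25-425 MHz range

for precision, ops_per_core_per_cycle in [("8-bit", 2048), ("2-bit", 8192)]:
    ops_per_second = cores * ops_per_core_per_cycle * clock_hz
    print(f"{precision}: ~{ops_per_second / 1e12:.0f} trillion operations per second")
```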

Intel Loihi chip

Pohoiki Springs

Pohoiki Springs is a system that incorporates Intel's self-learning neuromorphic chip Loihi, introduced in 2017 and perhaps named after the Hawaiian seamount Lōʻihi. Intel claims Loihi is about 1,000 times more energy efficient than the general-purpose computing systems used to train neural networks. In theory, Loihi supports both machine learning training and inference on the same silicon, independently of a cloud connection, and more efficiently than convolutional neural networks or deep learning neural networks. As an example, Intel describes a system for monitoring a person's heartbeat that takes readings after events such as exercise or eating and uses the chip to normalize the data and work out the 'normal' heartbeat; it can then spot abnormalities and handle new events or conditions.
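
The heartbeat example amounts to learning a per-person baseline and flagging readings that deviate from it. A minimal, non-neuromorphic sketch of that idea in ordinary Python (the data and threshold are invented for illustration; Loihi itself would do this with spiking networks):

```python
import statistics

def build_baseline(readings):
    """Learn a 'normal' resting heart rate from past readings (beats per minute)."""
    return statistics.mean(readings), statistics.stdev(readings)

def is_abnormal(reading, baseline, tolerance=3.0):
    """Flag a reading more than `tolerance` standard deviations from the baseline."""
    mean, stdev = baseline
    return abs(reading - mean) > tolerance * stdev

# Illustrative data: resting readings taken after normal daily events.
history = [62, 64, 61, 63, 65, 62, 60, 64]
baseline = build_baseline(history)

for bpm in [63, 66, 118]:
    print(bpm, "abnormal" if is_abnormal(bpm, baseline) else "normal")
```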

The first iteration of the chip was made using Intel's 14 nm fabrication process and houses 128 clusters of 1,024 artificial neurons each, for a total of 131,072 simulated neurons.[21] This offers around 130 million synapses, far fewer than the human brain's 800 trillion synapses, and behind IBM's TrueNorth.[22] Loihi is available in a USB form factor for research purposes to more than 40 academic research groups.[23] [24]
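
A quick arithmetic check of the neuron and synapse figures above (taking the stated 128 clusters of 1,024 neurons and roughly 130 million synapses at face value):

```python
# Quick check of the first-generation Loihi figures quoted above.
clusters = 128
neurons_per_cluster = 1024
neurons = clusters * neurons_per_cluster      # 131,072 simulated neurons

synapses = 130e6                              # "around 130 million synapses"
print(f"{neurons:,} neurons, ~{synapses / neurons:.0f} synapses per neuron on average")
```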

In October 2019, researchers from Rutgers University published a research paper demonstrating the energy efficiency of Intel's Loihi in solving simultaneous localization and mapping (SLAM).[25]

In March 2020, Intel and Cornell University published a research paper demonstrating the ability of Intel's Loihi to recognize different hazardous materials, which could eventually help to "diagnose diseases, detect weapons and explosives, find narcotics, and spot signs of smoke and carbon monoxide".[26]

Hala Point

Intel's second-generation chip, Loihi 2, was released in September 2021.[27] It boasts faster speeds, higher-bandwidth inter-chip communication for enhanced scalability, increased capacity per chip, a more compact size due to process scaling, and improved programmability.[28]

In 2024, Intel claimed that Hala Point, built from Loihi 2 chips, was the world's largest neuromorphic system, offering 10 times more neuron capacity and up to 12 times higher performance than its predecessor.

Hala Point provides up to 20 quadrillion operations per second (20 petaops), with an efficiency exceeding 15 trillion 8-bit operations per second per watt on conventional deep neural networks. This rivals levels achieved by GPU/CPU architectures. Hala Point packages 1,152 Loihi 2 processors, produced on the Intel 3 process node, in a six-rack-unit chassis. The system supports up to 1.15 billion neurons and 128 billion synapses distributed over 140,544 neuromorphic processing cores, consuming 2,600 watts of power. It also includes over 2,300 embedded x86 processors for ancillary computations.
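
Dividing the stated system totals by the 1,152 chips gives a rough per-chip picture; these are derived averages, not official per-chip specifications:

```python
# Per-chip averages derived from the Hala Point totals quoted above.
chips = 1152
total_neurons = 1.15e9
total_synapses = 128e9
total_cores = 140_544
total_watts = 2600

print(f"~{total_neurons / chips / 1e6:.1f} million neurons per chip")     # ~1.0 million
print(f"~{total_synapses / chips / 1e6:.0f} million synapses per chip")   # ~111 million
print(f"{total_cores // chips} neuromorphic cores per chip")              # 122
print(f"~{total_watts / chips:.1f} W per chip")                           # ~2.3 W
```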

Hala Point integrates processing, memory, and communication channels in a massively parallelized fabric, providing 16 PB/s of memory bandwidth, 3.5 PB/s of inter-core communication bandwidth, and 5 TB/s of inter-chip bandwidth.

The system can process its 1.15 billion neurons 20 times faster than a human brain. Its neuron capacity is roughly equivalent to that of an owl brain or the cortex of a capuchin monkey.

Loihi-based systems can perform inference and optimization using 100 times less energy at speeds as much as 50 times faster than CPU/GPU architectures.

SpiNNaker

SpiNNaker (Spiking Neural Network Architecture) is a massively parallel, manycore supercomputer architecture designed by the Advanced Processor Technologies Research Group at the Department of Computer Science, University of Manchester.[29]

Criticism

Critics argue that a room-sized computer – as in the case of IBM's Watson – is not a viable alternative to a three-pound human brain.[30] Some also cite the difficulty of bringing so many elements together in a single system, such as disparate sources of information and computing resources.[31]

In 2021, The New York Times published Steve Lohr's article "What Ever Happened to IBM's Watson?",[32] which examined some of IBM Watson's costly failures. One of them, a cancer-related project called the Oncology Expert Advisor,[33] was abandoned in 2016. During that collaboration, Watson could not make use of patient data and struggled to decipher doctors' notes and patient histories.

Notes and References

  1. Witchalls, Clint (November 2014). "A computer that thinks". New Scientist. 224 (2994): 28–29. doi:10.1016/S0262-4079(14)62145-X. Bibcode:2014NewSc.224...28W.
  2. Seo, Jae-sun; Brezzo, Bernard; Liu, Yong; Parker, Benjamin D.; Esser, Steven K.; Montoye, Robert K.; Rajendran, Bipin; Tierno, José A.; Chang, Leland; Modha, Dharmendra S.; Friedman, Daniel J. (September 2011). "A 45nm CMOS neuromorphic chip with a scalable architecture for learning in networks of spiking neurons". 2011 IEEE Custom Integrated Circuits Conference (CICC). pp. 1–4. doi:10.1109/CICC.2011.6055293. ISBN 978-1-4577-0222-8. S2CID 18690998. https://ieeexplore.ieee.org/document/6055293. Retrieved 21 December 2021.
  3. "Samsung plugs IBM's brain-imitating chip into an advanced sensor". Engadget. Retrieved 21 December 2021.
  4. Kelly, John E.; Hamm, Steve (2013). Smart Machines: IBM's Watson and the Era of Cognitive Computing. Columbia University Press. doi:10.7312/kell16856. ISBN 9780231537278.
  5. "The brain's architecture, efficiency… on a chip". IBM Research Blog. 19 December 2016. Retrieved 21 August 2021.
  6. "Intel's Pohoiki Beach, a 64-Chip Neuromorphic System, Delivers Breakthrough Results in Research Tests". Intel Newsroom.
  7. "Korean Researchers Devel". 30 March 2020.
  8. Merolla, P. A.; Arthur, J. V.; Alvarez-Icaza, R.; Cassidy, A. S.; Sawada, J.; Akopyan, F.; Jackson, B. L.; Imam, N.; Guo, C.; Nakamura, Y.; Brezzo, B.; Vo, I.; Esser, S. K.; Appuswamy, R.; Taba, B.; Amir, A.; Flickner, M. D.; Risk, W. P.; Manohar, R.; Modha, D. S. (2014). "A million spiking-neuron integrated circuit with a scalable communication network and interface". Science. 345 (6197): 668–673. doi:10.1126/science.1254642. PMID 25104385. Bibcode:2014Sci...345..668M. S2CID 12706847.
  9. IEEE Spectrum: https://spectrum.ieee.org/how-ibm-got-brainlike-efficiency-from-the-truenorth-chip
  10. "Cognitive computing: Neurosynaptic chips". IBM. 11 December 2015.
  11. Song, Kyung Mee; Jeong, Jae-Seung; Pan, Biao; Zhang, Xichao; Xia, Jing; Cha, Sunkyung; Park, Tae-Eon; Kim, Kwangsu; Finizio, Simone; Raabe, Jörg; Chang, Joonyeon; Zhou, Yan; Zhao, Weisheng; Kang, Wang; Ju, Hyunsu; Woo, Seonghoon (March 2020). "Skyrmion-based artificial synapses for neuromorphic computing". Nature Electronics. 3 (3): 148–155. doi:10.1038/s41928-020-0385-0. arXiv:1907.00957. S2CID 195767210.
  12. "Neuromorphic computing: The long path from roots to real life". 15 December 2020.
  13. "The brain's architecture, efficiency… on a chip". IBM Research Blog. 19 December 2016. Retrieved 28 September 2022.
  14. "IBM Research: Brain-inspired Chip". www.research.ibm.com. 9 February 2021. Retrieved 21 August 2021.
  15. Andreou, Andreas G.; Dykman, Andrew A.; Fischl, Kate D.; Garreau, Guillaume; Mendat, Daniel R.; Orchard, Garrick; Cassidy, Andrew S.; Merolla, Paul; Arthur, John; Alvarez-Icaza, Rodrigo; Jackson, Bryan L. (May 2016). "Real-time sensory information processing using the TrueNorth Neurosynaptic System". 2016 IEEE International Symposium on Circuits and Systems (ISCAS). p. 2911. doi:10.1109/ISCAS.2016.7539214. ISBN 978-1-4799-5341-7. S2CID 29335047. https://ieeexplore.ieee.org/document/7539214.
  16. "Stereo Vision Using Computing Architecture Inspired by the Brain". IBM Research Blog. 19 June 2018. Retrieved 21 August 2021.
  17. "IBM Debuts Brain-Inspired Chip For Speedy, Efficient AI". IEEE Spectrum. Retrieved 30 October 2023.
  18. Afifi-Sabet, Keumars (28 October 2023). "Inspired by the human brain — how IBM's latest AI chip could be 25 times more efficient than GPUs by being more integrated — but neither Nvidia nor AMD have to worry just yet". TechRadar. Retrieved 30 October 2023.
  19. Modha, Dharmendra S.; Akopyan, Filipp; Andreopoulos, Alexander; Appuswamy, Rathinakumar; Arthur, John V.; Cassidy, Andrew S.; Datta, Pallab; DeBole, Michael V.; Esser, Steven K.; Otero, Carlos Ortega; Sawada, Jun; Taba, Brian; Amir, Arnon; Bablani, Deepika; Carlson, Peter J. (20 October 2023). "Neural inference at the frontier of energy, space, and time". Science. 382 (6668): 329–335. doi:10.1126/science.adh1174. PMID 37856600. Bibcode:2023Sci...382..329M. S2CID 264306410. ISSN 0036-8075.
  20. Modha, Dharmendra (19 October 2023). "NorthPole: Neural Inference at the Frontier of Energy, Space, and Time". Dharmendra S. Modha - My Work and Thoughts. Retrieved 31 October 2023.
  21. "Why Intel built a neuromorphic chip". ZDNET.
  22. "Intel unveils Loihi neuromorphic chip, chases IBM in artificial brains". AITrends.com. 17 October 2017. Archived from the original (dead link) on 11 August 2021: https://web.archive.org/web/20210811174513/https://www.aitrends.com/future-of-ai/intel-unveils-loihi-neuromorphic/
  23. Feldman, M. (7 December 2018). "Intel Ramps Up Neuromorphic Computing Effort with New Research Partners". TOP500. Retrieved 22 December 2023.
  24. Davies, M. (2018). "Loihi - a brief introduction". Intel Corporation. Retrieved 22 December 2023.
  25. Tang, Guangzhi; Shah, Arpit; Michmizos, Konstantinos (2019). "Spiking Neural Network on Neuromorphic Hardware for Energy-Efficient Unidimensional SLAM". 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). pp. 4176–4181. doi:10.1109/IROS40897.2019.8967864. arXiv:1903.02504. ISBN 978-1-7281-4004-9. S2CID 70349899.
  26. Imam, Nabil; Cleland, Thomas A. (2020). "Rapid online learning and robust recall in a neuromorphic olfactory circuit". Nature Machine Intelligence. 2 (3): 181–191. doi:10.1038/s42256-020-0159-4. PMID 38650843. arXiv:1906.07067. S2CID 189928531.
  27. Hruska, J. (16 July 2019). "Intel's Neuromorphic Loihi Processor Scales to 8M Neurons, 64 Cores". Ziff Davis. Retrieved 22 December 2023.
  28. Peckham, Oliver (28 September 2022). "Intel Labs Launches Neuromorphic 'Kapoho Point' Board". HPCwire. Retrieved 26 October 2023.
  29. "Research Groups: APT - Advanced Processor Technologies (School of Computer Science - The University of Manchester)". apt.cs.manchester.ac.uk.
  30. Neumeier, Marty (2012). Metaskills: Five Talents for the Robotic Age. Indianapolis, IN: New Riders. ISBN 9780133359329.
  31. Hurwitz, Judith; Kaufman, Marcia; Bowles, Adrian (2015). Cognitive Computing and Big Data Analytics. Indianapolis, IN: John Wiley & Sons. p. 110. ISBN 9781118896624.
  32. Lohr, Steve (16 July 2021). "What Ever Happened to IBM's Watson?". The New York Times. ISSN 0362-4331. Retrieved 28 September 2022.
  33. Simon, George; DiNardo, Courtney D.; Takahashi, Koichi; Cascone, Tina; Powers, Cynthia; Stevens, Rick; Allen, Joshua; Antonoff, Mara B.; Gomez, Daniel; Keane, Pat; Suarez Saiz, Fernando; Nguyen, Quynh; Roarty, Emily; Pierce, Sherry; Zhang, Jianjun (June 2019). "Applying Artificial Intelligence to Address the Knowledge Gaps in Cancer Care". The Oncologist. 24 (6): 772–782. doi:10.1634/theoncologist.2018-0257. ISSN 1083-7159. PMC 6656515. PMID 30446581.