Newton Howard

Newton Howard is a brain and cognitive scientist, and the founder and former director of the MIT Mind Machine Project[1] [2] at the Massachusetts Institute of Technology (MIT). He is a professor of computational neurology and functional neurosurgery at Georgetown University.[3] He was a professor at the University of Oxford, where he directed the Oxford Computational Neuroscience Laboratory.[4] [5] He is also the director of MIT's Synthetic Intelligence Lab,[6] the founder of the Center for Advanced Defense Studies[7] and the chairman of the Brain Sciences Foundation.[8] Howard is also a senior fellow at the John Radcliffe Hospital in Oxford, a senior scientist at INSERM in Paris and a P.A.H. at the CHU Hospital in Martinique.

His research areas include cognition, memory, trauma, machine learning, comprehensive brain modeling, natural language processing, nanotechnology, medical devices and artificial intelligence.

Education and career

Howard earned his B.A. from Concordia University Ann Arbor and an M.A. in Technology from Eastern Michigan University. He went on to study at MIT and at the University of Oxford, where, as a graduate member of the Faculty of Mathematical Sciences, he proposed the Theory of Intention Awareness (IA).[9] He also received a doctorate in Cognitive Informatics and Mathematics from the University of Paris-Sorbonne, where he was awarded a Habilitation à Diriger des Recherches for his work on the Physics of Cognition (PoC).[10]

Howard is an author and national security advisor[11] [12] to several U.S. government organizations,[13] and his work has contributed to more than 30 U.S. patents and over 90 publications. In 2009, he founded the Brain Sciences Foundation (BSF), a nonprofit 501(c)(3) organization with the goal of improving the quality of life of people suffering from neurological disorders.

Research

Howard is known for his Theory of Intention Awareness (IA),[14] which offers a possible model of volition in human intelligence, applied recursively across all layers of biological organization. He next developed the Mood State Indicator (MSI),[15] a machine learning system that predicts emotional states by modeling the mental processes involved in human speech and writing. The Language Axiological Input/Output system (LXIO), built on the MSI framework, was found capable of detecting both sentiment and cognitive states by parsing sentences into words, processing each through time-orientation, contextual-prediction and subsequent modules, and then computing each word's contextual and grammatical function with a Mind Default Axiology. The key significance of LXIO was its ability to incorporate conscious thought and bodily expression (linguistic or otherwise) into a uniform code schema.
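The word-by-word scoring described above can be illustrated with a minimal sketch: a sentence is split into words, each word is looked up in a valence lexicon, and the scores are aggregated. Note that the lexicon, the negation handling and the function names here are hypothetical toy stand-ins, not LXIO's actual modules or axiology.

```python
# Illustrative sketch of a word-by-word sentiment pipeline in the spirit
# of LXIO's design. The lexicon and scoring rules are hypothetical.

# A toy "axiology": a lexicon mapping words to valence scores in [-1, 1].
TOY_AXIOLOGY = {
    "love": 0.8, "harmony": 0.6, "great": 0.7,
    "hate": -0.8, "pain": -0.6, "terrible": -0.7,
}

NEGATORS = {"not", "never", "no"}

def score_sentence(sentence: str) -> float:
    """Parse a sentence into words, score each against the lexicon,
    flip valence after a negator, and average the non-zero scores."""
    words = sentence.lower().replace(".", "").split()
    scores = []
    negate = False
    for word in words:
        if word in NEGATORS:
            negate = True
            continue
        valence = TOY_AXIOLOGY.get(word, 0.0)
        if negate:
            valence = -valence
            negate = False
        if valence != 0.0:
            scores.append(valence)
    return sum(scores) / len(scores) if scores else 0.0

print(score_sentence("I love this great harmony"))  # positive score
print(score_sentence("I do not love this pain"))    # negative score
```

A real system of this kind would replace the toy lexicon with learned models for each processing stage; the sketch only shows the overall parse-score-aggregate flow.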

In 2012, Howard published the Fundamental Code Unit (FCU)[16] theory, which uses unitary mathematics (ON/OFF, +/-) to correlate networks of neurophysiological processes with higher-order function. In 2013, he proposed the Brain Code (BC)[17] theory, a methodology for using the FCU to map entire circuits of neurological activity to behavior and response, effectively decoding the language of the brain.[18]
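The ON/OFF (+/-) coding idea can be sketched with a toy encoder that thresholds an activity trace into a sequence of unary symbols. The threshold, the symbol alphabet and both function names are illustrative assumptions, not the FCU's actual formalism.

```python
# Toy illustration of a unary ON/OFF (+/-) coding scheme in the spirit
# of the Fundamental Code Unit idea: threshold an activity trace into a
# string of '+' (above baseline) and '-' (at or below baseline) symbols.

def unary_encode(activity, baseline=0.0):
    """Map each sample of an activity trace to '+' or '-'."""
    return "".join("+" if sample > baseline else "-" for sample in activity)

def unary_decode(code):
    """Recover the ON/OFF pattern as a list of booleans."""
    return [symbol == "+" for symbol in code]

trace = [0.2, -0.1, 0.5, 0.0, 0.9, -0.3]
print(unary_encode(trace))  # +-+-+-
```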

In 2014, he hypothesized a functional endogenous optical network within the brain, mediated by neuropsin (OPN5). This self-regulating cycle of photon-mediated events in the neocortex involves sequential interactions among three mitochondrial sources of endogenously generated photons during periods of increased neural spiking activity: (a) near-UV photons (~380 nm), a byproduct of free-radical reactions; (b) blue photons (~470 nm) emitted by NAD(P)H upon absorption of near-UV photons; and (c) green photons (~530 nm) generated by NAD(P)H oxidases upon absorption of the NAD(P)H-generated blue photons. He argues that the bistable nature of this nanoscale quantum process indicates that an on/off (unary +/-) coding system exists at the most fundamental level of brain operation.
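The hypothesized cascade is a sequential cycle, which can be shown as a simple state machine: each emission triggers the next, looping while spiking continues. The wavelengths come from the text above; the state-machine framing and function name are purely illustrative.

```python
# Illustrative state machine for the hypothesized photon cascade:
# near-UV emission -> blue emission -> green emission -> cycle repeats.
CASCADE = {
    "near-UV (~380 nm)": "blue (~470 nm)",   # free-radical byproduct absorbed by NAD(P)H
    "blue (~470 nm)": "green (~530 nm)",     # NAD(P)H emission absorbed by NAD(P)H oxidases
    "green (~530 nm)": "near-UV (~380 nm)",  # cycle restarts with continued spiking
}

def run_cascade(start, steps):
    """Follow the cascade for a number of steps, returning states visited."""
    states = [start]
    for _ in range(steps):
        states.append(CASCADE[states[-1]])
    return states

print(run_cascade("near-UV (~380 nm)", 3))
```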

Transformers sculptures

See main article: Transformers (sculptures). In 2021, Howard installed two 2-ton (1,814 kg) sculptures depicting Bumblebee and Optimus Prime, characters from the Transformers media franchise, outside his home in the Georgetown neighborhood of Washington, D.C. His inspiration for the sculptures came from his work with artificial intelligence and "because the Transformers represent human and machine living in harmony, if you will."[3] [19] The reaction from locals was mixed, and he ran into legal issues with local government officials. He was eventually granted permission to keep the statues installed for a period of six months, but they remained after that time.[3] [19] [20]

Notes and References

  1. "MIT Mind Machine Project". Massachusetts Institute of Technology.
  2. Chandler, David. "Rethinking artificial intelligence". MIT News, Massachusetts Institute of Technology, December 7, 2009.
  3. Austermuhle, Martin. "Optimus Prime Faces A New And Unexpected Foe: Georgetown's Historic District". NPR, March 3, 2021. Retrieved May 8, 2022.
  4. "Nuffield Department of Surgical Sciences". University of Oxford.
  5. "Oxford Computational Neuroscience Laboratory". University of Oxford.
  6. "Synthetic Intelligence Lab". Massachusetts Institute of Technology.
  7. "Center for Advanced Defense Studies". Center for Advanced Defense Studies.
  8. "Brain Sciences Foundation". Brain Sciences Foundation.
  9. Howard, Newton. Theory of Intention Awareness in Tactical Military Intelligence: Reducing Uncertainty by Understanding the Cognitive Architecture of Intentions. Author House First Books Library, Bloomington, Indiana, 2002.
  10. Howard, Newton. "The Logic of Uncertainty and Situational Understanding". Center for Advanced Defense Studies (CADS)/Institute for the Mathematical Complexity & Cognition (MC), Centre de Recherche en Informatique, Université Paris Sorbonne, 1999.
  11. "CWID - Coalition Warrior Interoperability Demonstration". CWID JMO, 2007.
  12. "Joint C3 Information Exchange Data Model Overview". MIP-NATO Management Board, NATO, 2007.
  13. Howard, Newton. "Development of a Diplomatic, Information, Military, Health, and Economic Effects Modeling System". Massachusetts Institute of Technology, 2013.
  14. Howard, Newton. Theory of Intention Awareness in Tactical Military Intelligence: Reducing Uncertainty by Understanding the Cognitive Architecture of Intentions. Author House First Books Library, Bloomington, Indiana, 2002.
  15. Howard, Newton; Guidere, Mathieu. "LXIO: The Mood Detection Robopsych". The Brain Sciences Journal, vol. 1, January 2012, pp. 98–109. doi:10.7214/brainsciences/2012.01.01.05.
  16. Howard, Newton. "Brain Language: The Fundamental Code Unit". The Brain Sciences Journal, Brain Sciences Foundation, 2012.
  17. Howard, Newton. "The Twin Hypotheses". In Advances in Artificial Intelligence and Its Applications, Lecture Notes in Computer Science, vol. 8265, Springer, 2013, pp. 430–463. doi:10.1007/978-3-642-45114-0_35. ISBN 978-3-642-45113-3.
  18. Howard, Newton. The Brain Language. Cambridge Scientific Publishing, London, UK, 2015. ISBN 978-1-908106-50-6.
  19. Kois, Dan. "You Gotta Love These Two Enormous Transformers Statues This Guy Erected on His Fancy Georgetown Block". Slate, March 4, 2021. Retrieved May 8, 2022.
  20. Mathews, Topher. "Transformers Carry On". The Georgetown Metropolitan, January 3, 2022. Retrieved May 8, 2022.