Cool little article on a great fossil find shedding light on early insect evolution. Coxoplectoptera looks like a chimera assembled from parts of dragonflies, praying mantises and mayflies, yet their larvae resemble freshwater shrimp. Supposedly they were ambush predators. Just sounds cool.
Been away for a while. Lots of work, uncertainty and stress over the past 9 months on the personal and career fronts. Currently in the midst of a mad blast of catch-up reading, re-ramping up on the artificial intelligence, robotics and software fronts.
Ben has been a leading figure in the artificial general intelligence community for quite some time now. He's a seasoned presence at a variety of AI and futurism conferences and gatherings, and his popularity outside this community is starting to build thanks to all the Singularity buzz. He seems like a good dude, knows his shit, and has just enough pragmatism (probably from years of trying to build a real general AI system in his Novamente architecture) that whatever he's currently reading and looking into is probably a worthwhile subject to follow up on.
Ben's most recent blog post, titled Unraveling Modha and Singh's Map of Macaque, is an interesting dive into the robust functional cortical maps created by IBM researchers Dharmendra Modha and Raghavendra Singh. These maps illustrate a distinct cortical network architecture among the major functional areas and subnetworks of the brain. Check out their paper, datasets, an audio-narrated PowerPoint presentation and a short blog post directly from the researchers regarding their superlative work.
Ben provides an interesting restructuring of the results into a textual hierarchical format to further clarify the inbound and outbound connectivity of the cortical networks.
Hierarchical Temporal Memory
Catching up on progress from Jeff Hawkins' HTM venture Numenta. Looks like there are a couple of papers and videos I need to start with... I'd better go and start this list (copied and pasted from Numenta):
Brains, Minds and Machines (video) MIT recently held a symposium on Brains, Minds and Machines as part of its 150th anniversary celebration. Jeff Hawkins was invited to participate on a panel that addressed the question: Is it time to try again to understand the brain and engineer the mind? Jeff presents his answer to this question in this 10-minute video. This talk was given on May 4, 2011.
Advances in modeling neocortex and its impact on machine intelligence (video) Jeff Hawkins presents the new HTM algorithms for a graduate class on neural computation at the University of California at Berkeley. This talk is very similar to the Beckman Institute talk above, but might be useful for those viewers who want to hear the talk given again with some shades of difference. In addition, this talk includes a question & answer section. Note that the video is not high quality. This talk was given on December 2, 2010.
Quantum mechanics - the most successful description of all things tiny in the universe - has some very weird results and implications. One central tenet is that quantum events are not "actualized" until observed or interacted with in some way... pretty much saying the universe exists in all possible states at once until observed. Wacky shit.
Einstein despised this view of the nature of the universe, so year after year during the 1920s and 1930s he constructed elegant thought experiments that he felt could refute the quantum view of the world. Each thought experiment was eventually refuted - usually by Niels Bohr - until a famous paper from Einstein and two collaborators - the EPR (Einstein-Podolsky-Rosen) paper - hit the presses. This "perfect" paper elaborated a theoretical experiment and a set of arguments that posited the following (in simple terms)...
If quantum mechanics is correct about the nature of quantum events, then two particles sourced from the same interaction (entangled) have a curious and "spooky" property: any subsequent observations of their entangled properties are intimately linked and correlated. In other words, a measurement of an entangled property on one particle "forces" the other particle's entangled property to be disambiguated in a correlated way - no matter how far apart the particles are in space or time. Since Einstein's theory of special relativity disallows instantaneous communication across space, this result is quite strange.
What transpired was probably Einstein's worst nightmare. He was right in the sense that quantum mechanics could not be complete without allowing this "spooky" non-local condition to exist. John Bell, a brilliant Northern Irish physicist, produced a famous inequality and major clarifications of quantum theory that gave experimentalists a gateway to design experiments sensitive enough to test entanglement.
Experiments subsequently showed that this "spooky" condition actually did exist - and this cemented Quantum Mechanics as the strangest, spookiest and most accurate description of reality ever constructed.
Einstein - a founder of quantum mechanics and its most fervent critic - never ceases to amaze me... even in trying to disprove the theory, he helped advance it and uncover new phenomena such as entanglement.
Interesting new technologies have been derived from the properties of quantum entanglement, such as quantum teleportation, quantum cryptography and quantum computation. But beyond these groundbreaking technologies, arguably quantum entanglement's greatest legacy is the insight it provides into the nature of our universe.
Stephen Wolfram is an interesting dude. He's the founder of Wolfram Research and the inventor of the highly regarded Mathematica software platform. His social and presentation skills leave a lot to be desired - he occasionally comes across with a bombastic, condescending tone - but not intentionally, in my opinion; more like the product of a nerdy adolescence where a majority of his energies went into intellectual explorations rather than the "hormonal explorations" normal adolescents are guilty of... damn selfish genes.
China GDP / US GDP - The engine understands this cool little query as a mathematical division between the GDPs of both countries. Notice the exponential nature of China's GDP growth curve relative to the US after 2000!
United States / Russia - The engine understood this query and provided ratios "of interest" between the US and Russia.
Sounds simple, but any software engineer will tell you that you'd need a set of specifications to even begin coding a solution to the above queries - yet the engine performed some of them effortlessly.
Many queries I tried (some admittedly nonsensical) were not understood by WolframAlpha and were substituted with an assumption that often closely resembled the base query.
For what it's worth, in my opinion this is an impressive engine. Speaking from experience in AI and large-scale software system design, I'm not sure there's a better implementation of dynamic (query-based) knowledge delivery on a web platform outside of Google and Bing, especially when you consider the computational nature of the engine.
Been away for a while having babies :) I needed to pick up some new software consulting gigs, so I had to put the artificial intelligence (AI) projects down for a while -- but they're on the way back.
Been watching my baby slowly achieve greater coordination in his motor control... from seemingly random movements to more forceful, directed movements, from head stabilization to using his arms, hands and feet to cling to or climb up my chest.
Interesting little guy, with seemingly no motives but to feed, process food and "export processed food" -- hence the 8-12 diapers per 24-hour period.
Even though I haven't spent time enhancing our AI algorithms, I still try to stay at the forefront of neuro and AI research by reading books and listening to podcasts. If you're interested in neuroscience and/or artificial intelligence, I recommend the Brain Science Podcast by Dr. Ginger Campbell - I've been listening to it for a year now, and an excellent trove of information can be gathered from its insightful interviews covering the latest neuroscience research.
Change, change, change. President Obama built a legendary campaign around this notion. I've always been intrigued by change on many levels. Newton's entire calculus is built around analyzing the change of a function with respect to some attribute. Same with the Newtonian laws of motion... they're really just an analysis of changes (deltas) in some position (x, y or z) with respect to a delta in time.
Delta this and delta that... it seems like all information is really just composed of deltas. When you compress an image full of colors, if the same color runs across a large segment of the photo, you can represent that entire range by specifying the color and the number of pixels the run lasts for (also known as run-length encoding). That's why it's compressible: no new information exists within that range, so it can be collapsed - no deltas, nothing new to report.
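The run-length idea above fits in a few lines of Python. This is a minimal illustrative sketch, not the scheme any particular image format uses:

```python
from itertools import groupby

def rle_encode(data):
    """Collapse runs of identical values into (value, count) pairs.
    Where nothing changes -- no deltas -- a whole run becomes one entry."""
    return [(value, len(list(run))) for value, run in groupby(data)]

def rle_decode(pairs):
    """Expand (value, count) pairs back into the original sequence."""
    return [value for value, count in pairs for _ in range(count)]

# A row of pixels that is mostly one color compresses well:
row = ["blue"] * 9 + ["white"] * 2 + ["blue"] * 5
encoded = rle_encode(row)
print(encoded)  # [('blue', 9), ('white', 2), ('blue', 5)]
assert rle_decode(encoded) == row
```

Sixteen pixels become three pairs - the encoder only "reports" when the color changes, which is exactly the deltas-only intuition.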
Emotionally, humans also reserve the label of greatest achievement for the events with the largest distances (deltas) traversed to reach the goal. Man on the moon, poor child becomes millionaire, etc. The human brain seems to have evolved to detect deltas - it doesn't blindly capture all sensory information at a given time; it does most of its magic on the relationships between the deltas in its environment within a given time context - an ingenious optimization if you believe the only "useful information" is in deltas.
So what are the smallest deltas possible? Physicists call these indivisibly tiny units Planck units: units of space (length) and time that are the smallest measurable units possible in our universe.
Planck length ≈ 1.616 × 10^-35 meters; Planck time ≈ 5.39 × 10^-44 seconds
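These scales aren't arbitrary - they fall out of the fundamental constants. A quick sanity check in Python (using standard CODATA values; note the Planck time works out to roughly 5.39 × 10^-44 s, i.e. the Planck length divided by the speed of light):

```python
import math

# Fundamental constants, SI units (CODATA values)
hbar = 1.054571817e-34  # reduced Planck constant, J*s
G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8        # speed of light, m/s

# Planck length: sqrt(hbar * G / c^3)
planck_length = math.sqrt(hbar * G / c**3)

# Planck time: the time light takes to cross one Planck length
planck_time = planck_length / c

print(f"Planck length: {planck_length:.3e} m")  # ~1.616e-35 m
print(f"Planck time:   {planck_time:.3e} s")    # ~5.391e-44 s
```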
If the premise regarding deltas is correct - that useful information is available only when things change - then I guess no useful information exists below these units... supposedly the Heisenberg Uncertainty Principle dooms us to never measure anything smaller than this quantum of measurement. hmmmmm.... what does that mean?