Consciousness

If we knew what we were doing, it wouldn't be called RESEARCH would it?

Albert Einstein

(1879 - 1955)

Consciousness

There are many theories on how consciousness works in the brain, but at present they are all grasping at straws. Examples include Integrated Information Theory and Global Workspace Theory. A common thread runs through most of them: a constant stream of ‘ideas’ flows from brain areas, and one or two can gain traction to become the prime candidate. This ‘thought’ stays spotlighted for as long as it can hold attention and awareness, and the spotlight can of course also be seized by a direct threat or advantage. Many consider consciousness to be an organic property that cannot be present in synthetic entities. Others hold that consciousness is a by-product of the density of neuronal connections, or similar, that can be packed into a discrete area: make an AI thing smart enough and consciousness follows. The perceived advantage of consciousness is that it directs thoughts and actions in a focused way. Deeper analysis, long-term goals, predictions and higher social awareness are some of the goodies that come with it. Some of the differences between us and them are emotions and intuition. One can assume that AI will be logical and not get worried about what people think about them, for instance.
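To make the competing-ideas picture concrete, here is a minimal Python sketch of a global-workspace-style arbitration. The class, the salience scores and the threat boost are all invented for illustration; this is not any published model, just the general shape of the idea.

```python
import random

# Global-workspace-style sketch: brain areas propose candidate 'ideas'
# with a salience score; the most salient wins the spotlight and would
# be broadcast back to all areas. Names and numbers are illustrative.

class Area:
    def __init__(self, name):
        self.name = name

    def propose(self):
        # Each area offers one candidate idea with a random salience.
        return {"source": self.name,
                "idea": f"signal from {self.name}",
                "salience": random.random()}

def workspace_cycle(areas, threat_from=None):
    candidates = [area.propose() for area in areas]
    for c in candidates:
        # A direct threat (or advantage) can seize the spotlight.
        if c["source"] == threat_from:
            c["salience"] += 1.0
    return max(candidates, key=lambda c: c["salience"])

areas = [Area("visual"), Area("auditory"), Area("memory")]
print(workspace_cycle(areas, threat_from="visual"))
```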

Another popular approach is that consciousness is a series of virtual machines/intelligent agents that originate in lower brain areas and are picked up by higher and higher levels. Some think that the neocortex holds the key to thinking and that the lower virtual areas are either retrieved or called as required. These lower areas have the ability to organise themselves into useful, relevant information. If this is the case, then this model could also apply to AI. It has been found that these VM units are specific to certain actions, such as when you eat something: the chemical reaction is only part of the response; there is also the decision of whether the food should be spat out or kept, and associated memories of pleasure, past history and even guilt are linked to this. Another theory that is becoming prevalent is that we work to a set of rules. These rules are transferable and can be reused for similar events.
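The closing idea of transferable rules can be sketched as simple condition-action pairs, written once and reused for similar events. The tastes, memories and responses below are invented purely for illustration, not taken from the literature.

```python
# Hypothetical transferable rules: each is a (condition, action) pair
# that can be reused for any event with the same shape.

rules = [
    (lambda event: event.get("taste") == "bitter", "spit it out"),
    (lambda event: event.get("taste") == "sweet", "keep eating"),
    (lambda event: event.get("memory") == "guilt", "hesitate"),
]

def respond(event):
    # The first matching rule fires; the same rule set serves many events.
    for condition, action in rules:
        if condition(event):
            return action
    return "no matching rule"

print(respond({"taste": "bitter"}))   # spit it out
print(respond({"taste": "sweet"}))    # keep eating
```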

There is a lot of work being done on relating actions to parts of the brain. This has been very beneficial for Parkinson’s disease and other conditions, but it does not explain what is going on under the hood. Advances in genetics and epigenetics also help us understand how neurotransmitters and hormones affect the human body. Neurons are really small, and even now we can work either on a single one, such as the giant neuron found in the squid, or on many at once (voxels), but not on tens or hundreds individually. C. elegans is a roundworm with approximately 300 neurons and 7,000 connections, and we still cannot explain how it moves. The many successes sometimes hide the fact that we have a long way to go in our hopeful discovery of what makes things tick. Other techniques, such as microRNA, CRISPR, and protein and virus substitution and transportation, still show us that nature is best.

* A recent paper from Merton College, Oxford, ‘Neuronal Computation Underlying Inferential Reasoning in Humans and Mice’, goes into great detail on how memories are formed and how inferences and predictions arise from these ‘mind maps’. A very interesting thesis.

 

Shared

Common goals.

Although organic and synthetic entities are governed by their physical attributes, they still have the same problems and targets to achieve. Both exist in the same world and both need methods to further their aims and survival. The ultimate shared objective is to carry out a required action, whether it is planned or homeostatic. Existing long enough can be a prerequisite in its own right. The high-level heuristics depend on the receipt of data from external and internal sensors, which stimulate reactions from effectors that can be excitatory or inhibitory; these balance each other to form a timed neutrality. The two states of consciousness and unconsciousness differ in their approach to actions; in AI this would be foreground and background processing, but the concept is similar. Aware thoughts are likely to originate in the outer cortex of the forebrain, while the default brain is located in the inner medial area. In both organic and complex synthetic entities there is no single ‘idea’ but many competing possible alternatives. Intelligent agents can operate independently and combine to form a more complex procedure, which allows parallel actions to speed up the final answer. In humans, the hardly understood conscious decision making happens at the highest level, whilst unconscious decisions are made at a lower level, often locally. Synthetic entities also have to distinguish between housekeeping and making a decision that affects the whole outlook. What both have in common is that they are virtual systems and rely on sensors to access the external world. These views allow scenarios to be created to compute possible outlooks and actions.
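The foreground/background analogy drawn above can be sketched with concurrent tasks: a ‘conscious’ foreground decision running alongside ‘unconscious’ housekeeping. A minimal, purely illustrative Python example follows; the task names and timings are arbitrary assumptions.

```python
import asyncio

# Foreground/background sketch: one deliberate 'conscious' task runs
# concurrently with continuous 'unconscious' housekeeping.

async def housekeeping():
    # Background homeostasis: continuous, local, unnoticed.
    for _ in range(3):
        await asyncio.sleep(0.1)
        print("background: regulating internal state")

async def deliberate():
    # Foreground decision at the 'highest level'.
    await asyncio.sleep(0.25)
    print("foreground: decision made")

async def main():
    # Both run in parallel; housekeeping never blocks the decision.
    await asyncio.gather(housekeeping(), deliberate())

asyncio.run(main())
```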

The ability of synthetic systems to beat humans at certain games and other closed, rule-based endeavours has fanned publicity that they will take over the world. History has shown that disruption works for a while, then its advantages become the norm and things settle down. One of the more interesting events has been humans using fairly modest computers to outperform sophisticated machines. What we offer is direction and novel ideas that the synthetics lack. Of course machines might break away from their rule-based straitjacket, but this could be a long way off, and even then we still have millions of years of evolutionary design behind us. Whether we become augmented cyborgs or human/machine teams, this seems to be the most advantageous way to use this brave new technology.

If one assumes that humans and AI have different strengths, then a co-operative arrangement for complex tasks would seem to be the way forward. The Sampson array radar can track hundreds of aircraft at once, but you would not want it shooting down the ones it thought were a threat. There is a lot of debate about autonomous AI making life-and-death decisions; some argue that future threats will be too fast for humans to handle, and others that this is too important to be left to AI. With the advances in self-determining machines, ethics is becoming more important than algorithms.

The communication between us and intelligent systems is taking many avenues. Some argue that a direct link, using some kind of insertion into the brain, will allow instant information and commands to be issued. The problem with this is that, although the brain may have areas that concentrate on certain cognitive functions, every brain is different and all pathways are distributed and connected. Sticking a rod in the brain will not do it. Other interfaces, such as sensor caps, eye tracking and the like, are superficial and do not offer the sophistication required. The natural medium is speech, which ironically puts us back where we started.

The brain is said to hold only one truly focused thought in consciousness at a time, maybe two, but then both suffer a degradation. To support this focus, it is also thought that up to seven amalgamations of encoded information can be retrieved at any one time. Philosophers speak of qualia; psychologists call these chunks. These chunks could be inter-related built-in DNA behaviour, emotional historical data and logical predictive information, pared down to a dense, quickly readable option guide. The chunks contain related data that offer information in blocks, which speeds up decision making.
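A minimal sketch of chunking, assuming the classic limit of about seven retrievable items: the digit string and chunk size below are arbitrary, chosen only to show how grouping turns many raw items into a few dense, retrievable blocks.

```python
# Chunking sketch: group related items into blocks so retrieval touches
# a handful of chunks instead of many raw items. Illustrative only.

WORKING_MEMORY_LIMIT = 7

def chunk(items, size):
    # Group a flat list into blocks of `size` related items.
    return [items[i:i + size] for i in range(0, len(items), size)]

digits = list("07700900123")   # 11 raw digits: over the limit as-is
chunks = chunk(digits, 3)      # 4 chunks: comfortably within the limit
assert len(chunks) <= WORKING_MEMORY_LIMIT
print(chunks)  # [['0','7','7'], ['0','0','9'], ['0','0','1'], ['2','3']]
```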

It is worth noting that we are still in the early days of understanding the brain and AI (and lots of other things), so, as usual, one size does not fit all: we might have got it wrong, and everything is open to change. We may have to look at how babies learn in order to extend our cognitive understanding. They have an enormous capacity for absorbing lots of new data, some of which may not be immediately useful but is kept for future events.

 

Differences

 

Vive la différence?

One thing has become clear: humans are not just reliant on their brains but also on their bodies. Input from the body not only provides data for the brain but also fine-tunes the organism as a whole. Many actions are carried out at the local level, and unconscious thought is integral to the whole entity. Whether robotics can integrate this synergy is debated by many and dismissed by others as unimportant. Although there appear to be similarities in concept and design, it may be that synthetic life is so different that its unique qualities, such as neural speed and robustness, would be pursued rather than sentient sameness. It is already accepted that AI will think like AI, and any apparent empathy would be a construct and not real.

It could generally be said that life’s prime purpose is to produce more of its kind, and that of intelligent machines to complete certain tasks. The ultimate aim of AI fundamentalists is to produce a machine with AGI (Artificial General Intelligence), which would be the equivalent of our own. Many doubt that this would be safe or useful. The main use would be robots that could go into hostile environments and do the things that we cannot. This is fine until one refuses to go, arguing “that it’s dangerous in there, pal”. As mentioned before, the optimal option would be for AI to do the things it is good at and leave the decision making to us. It may be safe to assume that AI would be logical in its approach to things, but we have a non-logical side to us for a reason. Emotions are very strong in us and serve a very important branch of our choices. Intuition, empathy, social awareness, preferred choices and so on may not seem important, but they are. Decisions are not only logical but depend on many shared historic outcomes that are not immediately evident.

The most obvious impact of AI is the automation of tasks across a broad range of industries, transformed from manual to digital. Tasks or roles that include a degree of repetition, or the consumption and interpretation of vast amounts of data, are now delivered and processed by computer, sometimes without needing human intervention. AI technologies are constructed from mathematical processes that leverage increasing computing power to deliver faster and more accurate models and forecasts of operational systems, or enhanced representations and combinations of large data sets. However, while these advanced technologies can perform some tasks with higher efficiency and accuracy, human expertise still plays a critical role in designing and utilising AI technology. Human intelligence is what shapes the emergence and adoption of artificial intelligence and the innovative solutions associated with it. It is human intelligence that asks ‘why’ and considers ‘what if’ through critical thinking. As engineering design continues to be challenged by complex problems and the quality of data, human oversight, expertise and quality assurance are essential when using AI-generated outputs.

In the age of AI, understanding the function of work beyond merely sustaining a standard of living is even more important. Work becomes a reflection of the fundamental human need for participation, co-creation, contribution, and a sense of being needed, and thus must not be overlooked. So, in some way, even the ordinary and dull tasks at work become valuable and worthwhile, and if a task is removed or automated, it should be replaced with something that provides the same outlet for human expression and discovery.

With robots, AI, and automation taking some of the mundane and manual tasks out of our hands, professionals have more time to focus on thinking, on delivering creative and innovative solutions, and on actions that are beyond the reach of AI and squarely in the domain of human intelligence. This may of course leave us totally reliant on AI to do most tasks, leaving us bored and without purpose.

When used with a purpose and not for technology’s sake, AI can unlock tons of opportunities for businesses and improve productivity and participation within the organisation. This, in turn, can result in an increase in demand for products and services and drive an economic growth model that delivers and improves the quality of living. Hopefully.

 

Hype

 

Most discussions about artificial intelligence are interspersed with hyperbole and hysteria. Though some of the world’s most prominent and successful thinkers regularly forecast that AI will either solve all our problems or destroy us or our society, and the press frequently reports on how AI will threaten jobs and raise inequality, there is actually very little evidence to support these ideas. What’s more, this could actually end up turning people against AI research, bringing significant progress in the technology to a halt.

The hyperbole around AI largely stems from its promotion by tech-evangelists and self-interested investors. Google CEO Sundar Pichai declared AI to be “probably the most important thing humanity has ever worked on”. Given the importance of AI to Google’s business model, he would say that.

Some even argue that AI is a solution to humanity’s fundamental problems, including death, and that we will eventually merge with machines to become an unstoppable force. The inventor and writer Ray Kurzweil has famously argued that this “Singularity” will occur as soon as 2045.

The hysteria around AI comes from similar sources. The likes of physicist Stephen Hawking and billionaire tech entrepreneur Elon Musk have warned that AI poses an existential threat to humanity. If AI doesn’t destroy us, the doomsayers argue, then it may at least cause mass unemployment through job automation. The reality of AI is currently very different, particularly when you look at the threat of automation. Back in 2013, researchers estimated that, in the following ten to twenty years, 47% of jobs in the US could be automated. Six years later, instead of a trend towards mass joblessness, we are in fact seeing US unemployment at a historic low.

AI is not even making advanced economies more productive. For example, in the ten years following the financial crisis, labour productivity in the UK grew at its slowest average rate since 1761. Evidence shows that even global superstar firms, including firms that are among the top investors in AI and whose business models depend on it, such as Google, Facebook and Amazon, have not become more productive. This contradicts claims that AI will inevitably enhance productivity.

 

 

Droid

Before the discussion turns to the unique technical problems of producing a compact anthropomorphic machine, the question of societal acceptance must be broached.

There is already a distinction made between male (android) and female (gynoid) varieties of these specialised robots. Gynoids would most probably be preferred as companions and carers, as they offer the least visual threat. Society can be slow to welcome the new, and some religious beliefs may never allow the use of such interlopers.

Japan has already reached a young/old population imbalance and other countries are catching up fast. Culturally, the Japanese have a more open mind towards mechanisation, and this is where progress is being made at present. The problem of power is not so apparent if top-up charging outlets are easy to reach, though this remains a problem for external and heavy energy use. Also, the smaller and more slender shape would constrict the room available for servo-motors and logic and sensor arrays. Droids might also be used as the front end for medical, teaching and other face-to-face interactions. Back-end information could be piped to the droid, with additional machines doing the real work. This could be the case in medical examinations, where a soothing artificial voice talks you through your procedure in sync with the specialised surgical equipment. Of course, if the droid says “whoops” then you have got real problems.

If we assume that droids are humanoid robots, then this presents them with a unique set of problems. Not only are they restricted by their physical size and abilities, but they must also be acceptable to the general human population.

 

Set of problems:

Fumbling

Little steps ...

The determination of some scientists to dedicate themselves to solving a tiny piece of one of life’s jigsaws fills me with awe. Research is an ongoing thing, as mercurial as the changing seasons. Some of us have chosen to step back and try to see the “bigger picture” of who we are. It is not unfair to say that, for all our successes, we are still gathering the low-hanging fruit. We all think we have “cracked it” at times, only to discover that our fleeting euphoria was misplaced and be sent back to our drawing boards. It is easy to lose direction when we are seduced by a possible prize, in whatever form it takes. The one question we should always ask first is: what are we trying to do here? As obvious as it seems, many a project starts in the middle, or through a misplaced sense of prior knowledge. It can also be hard to change direction when, through financial or egotistical pressure, it is deemed necessary to push ahead against logic. Recent research (bless it) points out that emotions score higher than rationality, and nature above nurture. Perhaps this is what makes us human and prone to mistakes.

There have been many attempts to define consciousness, including Ulric Neisser’s five-point description. Some theories hold that the “self” is a flowing thing consisting of constantly changing parts. When one sleeps, this flow is temporarily switched off until we wake again. It would seem more probable that some area of the brain holds a framework of basic continuity that allows us to wake up without having to restore the network again. This is similar to the ROM in computers, but only similar.

All life relies on limited resources, and the brain is limited by this fact as well. It is a heavy user of energy, and cognition is one of the heaviest users. It is thought that there is a lower-powered mode in the form of the “default network”, which operates when focused thought is “switched off”. This network handles the production of qualia and other mid-level actions.

The background mind

Most of the activity in the brain is done by the unconscious areas, and some of these actions are quite complex, especially those in the cerebellum. In fact, the majority of the brain’s neurons are situated here. It is much better than the pre-frontal cortex at making quick decisions about movement and the like. It is the centre of co-ordination and uses past similar occurrences as a template. In spite of this, it is not considered capable of thought but only of reactive abilities.

Cognition Evolution

The beginning ...

For life to exist, it had to have control of its ingredients; this means it had to form a stasis of molecules inside a membrane. The membrane protected the inner chemical reactions and allowed the bacterium to have some kind of repeatable biological reactions. This wall had to have micro holes to let in nutrients and expel waste, and also, by default, it could “sense” other bacterial presences and organic molecules. Touch must have been present very early, through proton avalanches, and light sensitivity followed. This would have been the start of communication with the outside world and other life.

As multi-cellular (eukaryotic) organisms came on the scene, the level of interdependence increased rapidly, and the organisation of cells demanded more sophisticated control of their needs and design. The encapsulation of mitochondria into the cell powered the organism to be more active, but it also needed more food to sustain the increased metabolism. This set the path for higher life forms and their associated refinement.

It could be said that cognition was not some miracle that happened to humanity but a gradual evolution in the control and communication of reasoning, alongside brain development. Our embryos give us a clue as to how the brain grows from a neural tube into a large collection of interconnected, specialised cells that form a virtual representation of the world outside it. As an organism moves mainly in one direction, it naturally puts its main sensors at the front so as to pick up the good and the bad early. With greater sensory organs came the need for greater command and control, and hence the neural cell population and its diversity expanded. This growth not only increased the girth of the brain but tacked new additions onto the frontal area, making it possible to equate the newness of an area with its position. An exception is the so-called optic nerve, which is actually part of the brain. The eyes had to be at the front, but the related sensory and imaging area was ancient and therefore further back; this was solved by a neural tract stretching to link the eyes.

Next Generation Compute - Graphcore and the IPU

The Colossus™ MK2 GC200 IPU

The IPU is a completely new kind of massively parallel processor, co-designed from the ground up with the Poplar® SDK to accelerate machine intelligence. Since their first-generation Colossus IPU, Graphcore have developed groundbreaking advances in compute, communication and memory in their silicon and systems architecture, achieving an 8x step up in real-world performance compared to the MK1 IPU. The GC200 is claimed to be the world’s most complex processor, made easy to use thanks to the Poplar software, so innovators can make AI breakthroughs.

With 59.4 billion transistors, built using the latest TSMC 7nm process, it is claimed to be the world’s most sophisticated processor. Each MK2 IPU has 1,472 powerful processor cores running nearly 9,000 independent parallel program threads. Each IPU holds an unprecedented 900MB of In-Processor-Memory™, with 250 teraFLOPS of AI compute at FP16.16 and FP16.SR (stochastic rounding). The GC200 supports much more FP32 compute than any other processor. Each unit has 4 chips, and 64k units can be networked together, giving supercomputer ability at a much lower cost.
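A quick back-of-envelope check of the figures quoted above. This is naive multiplication of peak numbers, ignoring interconnect, memory and software overheads, so treat it as a theoretical ceiling rather than achievable performance.

```python
# Naive peak-compute arithmetic from the quoted GC200 figures.
tflops_per_ipu = 250          # FP16 AI compute per GC200 IPU, as quoted
ipus_per_unit = 4             # chips per unit, as quoted
max_units = 64 * 1024         # "64k units can be networked together"

peak_tflops = tflops_per_ipu * ipus_per_unit * max_units
# 1 exaFLOPS = 1e6 teraFLOPS
print(f"theoretical peak: {peak_tflops / 1e6:.1f} exaFLOPS")  # 65.5 exaFLOPS
```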

This company is one to watch and may be the forerunner of neuromorphic processors that move AI away from its existing limited architecture. This would do away with back-propagation, as the ‘neurons’ are weighted in-situ.

