Intelligence?

AI

Google DeepMind's AlphaGo victory over leading human Go champions was a stunning exhibition of how far machine learning has come. Its unorthodox and persistent winning play showed how far the underlying algorithms have advanced. Its AlphaFold system (protein structure prediction) is even more impressive.

Common goals

Although organic and synthetic entities are governed by their physical attributes, they still face the same problems and targets. Both exist in the same world, and both need methods to further their aims and survival. The ultimate shared objective is to carry out a required action, whether it is planned or simple homeostasis; existing long enough can be a prerequisite in its own right. The high-level heuristics depend on the receipt of data from external and internal sensors, which stimulate reactions from effectors that can be excitatory or inhibitory; these balance each other to form a timed neutrality.

The two states of consciousness and unconsciousness differ in their approach to actions; in AI this would be foreground and background processing, but the concept is similar. Aware thoughts are likely to originate in the outer cortex of the forebrain, while the default-mode brain is located in the inner medial area. In both organic and complex synthetic entities there is no single 'idea' but many competing alternatives. Intelligent agents can operate independently and combine to form a more complex procedure, allowing parallel actions to speed up the final answer. In humans the poorly understood conscious decision making happens at the highest level, whilst unconscious decisions are made at a lower level, often locally. Synthetic entities likewise have to distinguish between housekeeping and decisions that affect the whole outlook. What both have in common is that they are virtual systems that rely on sensors to access the external world; these views allow scenarios to be created to compute possible outlooks and actions.

The ability of synthetic systems to beat humans at certain games and other closed, rule-based endeavours has fanned publicity that they will take over the world. History has shown that disruption works for a while, then its advantages become the norm and things settle down. One of the more interesting events has been when humans using fairly modest computers outperform sophisticated machines: what we offer is direction and novel ideas that the synthetics lack. Of course machines might break away from their rule-based straitjacket, but this could be a long way off, and even then we still have millions of years of evolutionary design behind us. Whether we become augmented cyborgs or human/machine teams, this seems to be the most advantageous way to use this brave new technology.

Progress is always a two-edged sword. The benefits of technology can be used to improve or diminish its advantages to the environment and society. The increase in the scope of available soft and hard power has an impact on privacy and the capacity for destruction. The threat of an AI killer robot is much less likely than the insidious drip-drip of central control bleeding into politics and democracy. We should not create things because we can, but because they have merit.

Learning and other algorithm-based solutions have received a lot of media attention, stretching from the helpful to the ridiculous. The development so far has been two-dimensional, using chip speed and unlimited resources to back up an apparent front end that encroaches on human abilities. For those of us who have studied both machine learning and neuroscience, it is apparent that humans and machines are very different entities. The brain is a collection of highly adapted cells, modified to carry out specific tasks and integrated throughout the brain and body. The synthetic approach so far has been to use brute force to overcome the weaknesses of artificial design. The latest developments seek to use a quasi-neuronal type of inherent weighting to simulate the human brain. The question should be, as always: what are we trying to do here? If we are trying to out-compete the human brain, then we had better get ready for a long wait.

One area being investigated is the use of robot swarms. This envisages small to large communities of interlinked robots moving by various means in co-ordinated manoeuvres to achieve a goal. They may act autonomously or be controlled by a central, hive-type capacity. Some also suggest they may replace declining bees as pollen spreaders, although saving the bees could be a better alternative.

The effect of "AI" in medicine cannot be overstated. Its use in predictive and analytic work on the enormous amounts of available data is changing the landscape in genetics and therapy. Robotic surgery, whether local or remote, offers a more reliable and detailed ability to improve the outlook for patients. The ideal scenario would be increased training and lower costs, so that these benefits become universal.


AlphaFold

Although DNA has been lauded as the basis of life, it is the proteins that do all the work, and they are often more complex than RNA strands. Like RNA strands, proteins must fold into a natural structure that exposes their active amino acids to drive chemical actions. This structure is very important, and misfolding can have serious consequences. AlphaFold predicts this folding and is a very important tool for research into illnesses, both for understanding them and for finding possible cures. The DeepMind team's system, a neural-network architecture whose predictions are scored against the GDT (Global Distance Test) measure, may yet be fully released.

This is a major step in medical advancement, and the Google DeepMind team deserves all the plaudits showered on it.

Non-ANN Machine Learning Algorithms

1. Linear Regression

To understand the working functionality of this algorithm, imagine how you would arrange random logs of wood in increasing order of their weight. There is a catch, however: you cannot weigh each log. You have to guess its weight just by looking at the height and girth of the log (visual analysis) and arrange the logs using a combination of these visible parameters. This is what linear regression is like.

In this process, a relationship is established between independent and dependent variables by fitting them to a line. This line is known as the regression line and is represented by the linear equation Y = a*X + b.

In this equation:

  • Y – Dependent Variable
  • a – Slope
  • X – Independent variable
  • b – Intercept

The coefficients a & b are derived by minimizing the sum of the squared difference of distance between data points and the regression line.

2. Logistic Regression

Logistic Regression is used to estimate discrete values (usually binary values like 0/1) from a set of independent variables. It helps predict the probability of an event by fitting data to a logit function. It is also called logit regression.
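As a hedged sketch, assuming scikit-learn and a tiny invented data set (the feature values and 0/1 labels below are not from any real study), a logit model can be fitted and queried for probabilities like this:

  # Minimal logistic regression sketch on invented binary data.
  import numpy as np
  from sklearn.linear_model import LogisticRegression

  X = np.array([[0.5], [1.0], [1.5], [2.0], [2.5], [3.0]])  # single feature
  y = np.array([0, 0, 0, 1, 1, 1])                          # binary outcome

  clf = LogisticRegression().fit(X, y)
  print(clf.predict([[1.8]]))        # predicted class, 0 or 1
  print(clf.predict_proba([[1.8]]))  # probability of each class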

These methods listed below are often used to help improve logistic regression models:

  • include interaction terms
  • eliminate features
  • use regularization techniques
  • use a non-linear model

3. Decision Tree

It is one of the most popular machine learning algorithms in use today; it is a supervised learning algorithm used for classification problems. It works well for both categorical and continuous dependent variables. In this algorithm, we split the population into two or more homogeneous sets based on the most significant attributes/independent variables.
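A minimal sketch, assuming scikit-learn and its bundled Iris data set (chosen here only as a convenient example), might look like this:

  # Minimal decision tree sketch: split the data on the most informative features.
  from sklearn.datasets import load_iris
  from sklearn.model_selection import train_test_split
  from sklearn.tree import DecisionTreeClassifier

  X, y = load_iris(return_X_y=True)
  X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

  tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
  print("test accuracy:", tree.score(X_test, y_test))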

4. SVM (Support Vector Machine)

SVM is a method of classification in which you plot raw data as points in an n-dimensional space (where n is the number of features you have). The value of each feature is then tied to a particular coordinate, making it easy to classify the data. Lines (or, in higher dimensions, hyperplanes) called classifiers can then be used to split the data into classes.
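A minimal sketch, assuming scikit-learn and a synthetic two-cluster data set standing in for real features, fits a linear separating boundary:

  # Minimal SVM sketch: fit a linear boundary between two synthetic clusters.
  from sklearn.datasets import make_blobs
  from sklearn.svm import SVC

  X, y = make_blobs(n_samples=100, centers=2, random_state=0)  # invented 2-D data

  svm = SVC(kernel="linear").fit(X, y)
  print(svm.predict(X[:5]))           # predicted classes for the first few points
  print(svm.support_vectors_.shape)   # the points that define the separating line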

5. Naive Bayes

A Naive Bayes classifier assumes that the presence of a particular feature in a class is unrelated to the presence of any other feature.

Even if these features are related to each other, a Naive Bayes classifier would consider all of these properties independently when calculating the probability of a particular outcome.

A Naive Bayesian model is easy to build and useful for massive datasets. It is simple, and on some tasks it can outperform even highly sophisticated classification methods.
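A minimal sketch, assuming scikit-learn and its Iris data set as a stand-in, fits a Gaussian Naive Bayes classifier in a few lines:

  # Minimal Naive Bayes sketch: each feature is treated as independent of the others.
  from sklearn.datasets import load_iris
  from sklearn.model_selection import train_test_split
  from sklearn.naive_bayes import GaussianNB

  X, y = load_iris(return_X_y=True)
  X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

  nb = GaussianNB().fit(X_train, y_train)
  print("test accuracy:", nb.score(X_test, y_test))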

6. KNN (K- Nearest Neighbors)

This algorithm can be applied to both classification and regression problems, although within the data science industry it is more widely used for classification. It is a simple algorithm that stores all available cases and classifies any new case by taking a majority vote of its k nearest neighbours. The case is then assigned to the class with which it has the most in common, as measured by a distance function.

KNN can be easily understood by comparing it to real life. For example, if you want information about a person, it makes sense to talk to his or her friends and colleagues!

Things to consider before selecting KNN: 

  • KNN is computationally expensive
  • Variables should be normalized, or else higher range variables can bias the algorithm
  • Data still needs to be pre-processed.
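A minimal sketch, assuming scikit-learn and its Iris data set, handles the normalisation point above with a scaler placed before the k-nearest-neighbours vote:

  # Minimal KNN sketch: scale the features, then vote among the 5 nearest neighbours.
  from sklearn.datasets import load_iris
  from sklearn.model_selection import train_test_split
  from sklearn.neighbors import KNeighborsClassifier
  from sklearn.pipeline import make_pipeline
  from sklearn.preprocessing import StandardScaler

  X, y = load_iris(return_X_y=True)
  X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

  knn = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
  knn.fit(X_train, y_train)
  print("test accuracy:", knn.score(X_test, y_test))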

7. K-Means

It is an unsupervised algorithm that solves clustering problems. Data sets are classified into a particular number of clusters (let's call that number K) in such a way that the data points within a cluster are homogeneous and heterogeneous compared to the data in other clusters.

How K-means forms clusters:

  • The K-means algorithm picks k points, called centroids, one for each cluster.
  • Each data point forms a cluster with the closest centroids, i.e., K clusters.
  • It now creates new centroids based on the existing cluster members.
  • With these new centroids, the closest distance for each data point is determined. This process is repeated until the centroids do not change.
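A minimal sketch of these steps, assuming scikit-learn and a synthetic 2-D data set with three invented clusters:

  # Minimal K-means sketch: assign points to the nearest centroid, then update centroids.
  from sklearn.cluster import KMeans
  from sklearn.datasets import make_blobs

  X, _ = make_blobs(n_samples=300, centers=3, random_state=0)  # invented data

  km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
  print(km.cluster_centers_)   # the final centroids, one per cluster
  print(km.labels_[:10])       # cluster assignments for the first 10 points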

8. Random Forest 

A collection of decision trees is called a Random Forest. To classify a new object based on its attributes, each tree produces a classification and "votes" for that class. The forest chooses the classification with the most votes (over all the trees in the forest).

Each tree is planted & grown as follows:

  • If the number of cases in the training set is N, then a sample of N cases is taken at random, with replacement. This sample will be the training set for growing the tree.
  • If there are M input variables, a number m<<M is specified such that at each node, m variables are selected at random out of the M, and the best split on this m is used to split the node. The value of m is held constant during this process.
  • Each tree is grown to the largest extent possible. There is no pruning.
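A minimal sketch, assuming scikit-learn and its Iris data set, grows such a forest and lets the trees vote:

  # Minimal random forest sketch: many trees on bootstrap samples, majority vote.
  from sklearn.datasets import load_iris
  from sklearn.ensemble import RandomForestClassifier
  from sklearn.model_selection import train_test_split

  X, y = load_iris(return_X_y=True)
  X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

  # Each tree sees a bootstrap sample and a random subset of features at each split.
  forest = RandomForestClassifier(n_estimators=100, max_features="sqrt",
                                  random_state=0).fit(X_train, y_train)
  print("test accuracy:", forest.score(X_test, y_test))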

9. Dimensionality Reduction Algorithms

In today's world, vast amounts of data are being stored and analyzed by corporates, government agencies, and research organizations. As a data scientist, you know that this raw data contains a lot of information - the challenge is in identifying significant patterns and variables.

Dimensionality reduction algorithms such as Factor Analysis and Missing Value Ratio, along with feature-selection methods based on Decision Trees and Random Forests, can help you find the relevant details.
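As one common example from this family (PCA, which is not in the list above but serves the same purpose), a minimal sketch assuming scikit-learn and its digits data set projects 64 features down to 2:

  # Minimal dimensionality reduction sketch, using PCA as an illustrative example.
  from sklearn.datasets import load_digits
  from sklearn.decomposition import PCA

  X, _ = load_digits(return_X_y=True)       # 64 features per sample
  pca = PCA(n_components=2)
  X_reduced = pca.fit_transform(X)

  print(X.shape, "->", X_reduced.shape)      # e.g. (1797, 64) -> (1797, 2)
  print("variance explained:", pca.explained_variance_ratio_.sum())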

10. Gradient Boosting & AdaBoost

These are boosting algorithms used when massive loads of data have to be handled to make predictions with high accuracy. Boosting is an ensemble learning algorithm that combines the predictive power of several base estimators to improve robustness.
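A minimal sketch, assuming scikit-learn and its bundled breast-cancer data set as a stand-in, fits both AdaBoost and gradient boosting on the same split:

  # Minimal boosting sketch: AdaBoost and gradient boosting compared side by side.
  from sklearn.datasets import load_breast_cancer
  from sklearn.ensemble import AdaBoostClassifier, GradientBoostingClassifier
  from sklearn.model_selection import train_test_split

  X, y = load_breast_cancer(return_X_y=True)
  X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

  ada = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
  gbm = GradientBoostingClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

  print("AdaBoost accuracy:", ada.score(X_test, y_test))
  print("Gradient boosting accuracy:", gbm.score(X_test, y_test))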

In short, it combines multiple weak or average predictors to build a strong predictor. These boosting algorithms consistently perform well in data science competitions such as Kaggle, AV Hackathon, and CrowdAnalytix, and are among the most preferred machine learning algorithms today. Use them, along with Python and R code, to achieve accurate outcomes.

By Simon Tavasoli

ANN types

  • Multilayer perceptron (MLP).
  • Convolutional neural network (CNN).
  • Recursive neural network (RNN).
  • Recurrent neural network (RNN).
  • Long short-term memory (LSTM).
  • Sequence-to-sequence models.
  • Generative adversarial network (GAN).
  • Shallow neural networks.
  • And new ones developing all the time.




Copyright © 2020 deepermind.co.uk | Maldwyn Palmer