This article is an introductory discussion of artificial intelligence. It is not a survey of the various tools, techniques, and technologies available for AI; instead, it examines what intelligence is and how one might identify artificial intelligence.
The core of the article is the definition of intelligence and the Turing test. It also touches on intelligence versus consciousness and concludes with a discussion of general AI.
Most of the discussion proceeds through thought experiments and postulates about machine intelligence.
Let us start with an argument presented by Denis Diderot in “Pensées Philosophiques”: “If they find a parrot who could answer to everything, I would claim it to be an intelligent being without hesitation.”
If this argument is extended to machines, would a machine that answers everything as a human being would be intelligent?
If this is intelligence, are human and computer intelligence the same, or is computer intelligence merely a simulation? Conversely, is the human brain itself a computer?
Would a machine have consciousness, and would it be able to feel? Is consciousness required for intelligence?
A final lens for interpreting intelligence is dualism, which debates whether the mind, and hence intelligence, is purely physical or has non-physical components.
These questions remain largely philosophical, unanswered, and open to interpretation.
Alan Turing was one of the first pioneers in the field of machine intelligence and believed that intelligence is physical. He proposed: “If a machine behaves as intelligently as a human being, then it is as intelligent as a human being.”
This brings us to a definition of an intelligent machine, namely a machine that can solve every problem a human can; this also forms the scope of artificial intelligence. To achieve this, one needs to precisely define every aspect of learning and the other features of intelligence so that a machine can simulate them.
While these definitions bring us a step closer to comparing machine and human intelligence, they lack a precise mechanism for making the comparison.
In a 1950 paper, Alan Turing reduced the problem of defining intelligence to a simple question of conversation. The essence of the test is this: if a human interrogator converses with both a human and a computer behind closed screens and cannot tell which is which, then the computer is intelligent.

The consequence of this paper boils down to the claim that if a machine can answer a human as another human would, it may be considered intelligent.

One of the foremost criticisms of the Turing test is the Chinese room argument.
Let us assume there exists a computer program that accepts input in Chinese and produces responses in Chinese. It is further capable of passing the Turing test, so the interrogator believes the responses come from an intelligent human.
The question posed by the Chinese room argument is whether the program understands Chinese or is merely simulating an understanding of Chinese. If the machine understands Chinese, it is a strong AI; if it does not, it is a weak AI.
The argument may be extended to a human. Suppose we replace the machine with a person equipped with an infinite library of every possible question in Chinese along with its answer. If that person received a request and mapped it to the appropriate answer, would they understand Chinese? This raises the question of intelligence versus understanding.
There are further arguments about the program itself in the Chinese room: what if the program maps and encodes every neuron in a Chinese speaker's brain? Human intelligence works within a limited capacity, whereas a program executing such a simulation has virtually unlimited capacity. Does intelligence require functioning within limited capacity and resources?

Multiple arguments have been made for and against the test, for example:
- Why should intelligence be defined in terms of human capabilities and limitations? Should intelligence not be larger than human capacity?
- Human intelligence is driven by experience, instinct, and the unconscious mind, which do not follow fixed rules.
- The Turing test cannot recognize the kind of intelligence exhibited by babies or young children.
At the next level of the debate are artificial consciousness and artificial self-awareness. Active research is being done on these subjects, but they are beyond the scope of this article.

We wish to end this discussion by drawing a distinction between specialized AI and general AI. Techniques such as machine learning and deep learning produce what is known as specialized AI. These systems are good at the one thing they are trained to do, for example playing chess or classifying images, and would need to be retrained for a new class of problems.
Artificial general intelligence, by contrast, is machine intelligence that could do anything a human can do, which involves the capability to solve a large class of problems. It is an active area of research and requires an understanding of human intelligence before it can be synthesized.
Computer technology is expanding at an unprecedented rate and shows no sign of slowing down. Concepts old and new, from research labs and the theoretical world, are making their way into mainstream consumer computing.
One such concept, borrowed from the mainframe world, is containers, and it has taken the tech world by storm. In this article we explore containers.
To understand containers, let us start with the concept of virtualization. In computing, virtualization is the ability to create a virtual instance of a resource, which might be hardware, a device, an operating system, and so on. This covers a very large class of items, including hardware virtualization in the form of hypervisors (virtual machines) such as Oracle VirtualBox, abstract computers such as the JVM and the .NET CLR, LPARs (which divide the resources of a mainframe), and OS virtualization in the form of containers.
On the face of it, these technologies may seem similar but they are not.
A hypervisor is essentially a mechanism to virtualize hardware. With a hypervisor, one creates virtual disks, CPUs, network interfaces, and so on; together these constitute the virtual machine. The virtual machine in turn hosts an operating system, which in turn hosts the applications. Hypervisors come in two types: Type 1, or bare-metal, hypervisors run directly on the hardware, while Type 2 hypervisors require a host operating system and run on top of it.
While hypervisors share the resources of the underlying computer by presenting virtual devices, they still expose the instruction set of the raw machine, which allows standard software to run as is. Abstract machines like the JVM or the CLR, by contrast, run inside an OS and provide an instruction set completely different from that of the raw hardware; they are essentially programming models that make software development friendlier.
An LPAR, or logical partition, is a technique for logically dividing and allocating the resources of a mainframe. This yields separate virtualized computers that may host separate operating systems.
Containers are a bit different. While the previous techniques virtualize the hardware, and as a consequence require a copy of the operating system to be installed on each virtual computer, containers virtualize the operating system itself. This mechanism of virtualization has many benefits.
Although we could continue discussing the various virtualization technologies, we will keep our focus on containers, specifically Docker.
Containers are operating-system-level virtualization: a mechanism in which the OS kernel allows multiple isolated user-space instances to exist. From the point of view of a container, the instance may feel like a real OS, but the kernel manages any competing resources. For example, each container appears to have its own root directory, while in reality, using mechanisms such as chroot, each container is confined to a separate directory.
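A minimal way to see this isolation (the image tag and container names below are arbitrary choices) is to create a file in one container and observe that another container cannot see it:
docker run --name box1 ubuntu:16.04 touch /only-in-box1
# box1 creates a file at its own root; the host and other containers are unaffected
docker run --name box2 ubuntu:16.04 ls /
# box2 lists its own root directory; /only-in-box1 does not appear there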
A visible consequence of this can be seen, for example, when starting a Docker container that hosts Tomcat:
docker run -it --rm -p 8888:8080 tomcat:8.0
This means: run Tomcat, which by default listens on port 8080, and map it to port 8888 on the real machine. In hardware virtualization no such mapping is required, because each virtual machine has its own network interfaces.
This allows multiple container instances to run within one OS, each mapped to a different physical port, as illustrated below.
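As a small illustration (the host ports and container names here are arbitrary), two Tomcat containers can run side by side on one host, each exposed on a different physical port:
docker run -d --name tomcat-a -p 8081:8080 tomcat:8.0
# first instance, reachable on host port 8081
docker run -d --name tomcat-b -p 8082:8080 tomcat:8.0
# second instance, reachable on host port 8082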
Containers have several benefits over traditional hypervisors. The most visible is that each instance does not require a separate installation of an operating system. This saves resources, since the memory and CPU footprint comes down, and reduces the licensing cost of software such as the OS, antivirus, and so on. It also speeds up startup and shutdown, because no complete OS boot sequence is required: launching a container takes a matter of single-digit seconds, while launching a VM takes on the order of minutes.
Containers bring their own set of challenges, such as the need for load balancers that can route to applications across different ports.
In the VM world, if one VM's OS fails it does not bring down the other instances, and the use of Type 1 hypervisors (standard among commercial cloud providers) minimizes the risk of an entire physical machine going down. With containers, however, if the shared OS goes down it may take multiple containers down with it.
Containers, like VMs, require resource management and allocation to ensure that no starvation occurs.
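As a minimal sketch, Docker allows per-container limits to be set at launch time (the values here are arbitrary):
docker run -d --memory 512m --cpus 1.5 -p 8083:8080 tomcat:8.0
# cap this container at 512 MB of RAM and one and a half CPU cores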
To improve reliability, container instances should be distributed across separate physical OS instances. Multiple mechanisms exist to manage this; for example, CoreOS, a distributed Linux OS, manages containers across separate physical hosts.
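As a rough sketch of the idea, assuming two hosts (host-a and host-b are placeholder names) each expose their Docker daemon over TCP, the same image can be started on both from a single control machine; in practice an orchestrator such as fleet on CoreOS or Kubernetes automates this placement:
DOCKER_HOST=tcp://host-a:2375 docker run -d -p 8080:8080 tomcat:8.0
# start an instance on host-a via its remote Docker API
DOCKER_HOST=tcp://host-b:2375 docker run -d -p 8080:8080 tomcat:8.0
# start a second instance on host-b, so the service survives the loss of one host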
While containers provide very low overhead, one should not jump into them blindly. VMs allow a degree of isolation that containers do not: users managing containers have access to every container on a machine. Databases also tend to be an area where containers are hard to manage. That said, containers are the future and are here to stay.
Enterprises should start investing in containers and take the same first steps they took for VMs and the cloud. This transition will, however, require a mindset change in which the database, application, and network teams trust one another.
I would like to end with a quick look at how containers make the deployment process seamless and in line with leading DevOps practices.
The primary benefit of a container is that it guarantees consistent configuration and software across instances. For example, a Tomcat 8 container with JDK 8 is guaranteed to be identical across environments; there is no chance of a configuration mismatch. One can take such a base image and extend it with an application of choice.
As a development and deployment strategy, the build process prepares a container image, and that same image is deployed to every environment.
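As a minimal sketch of such a build (myapp.war and the image tag are hypothetical names), the application can be layered on top of the official Tomcat image, and the resulting image is what gets deployed:
# Dockerfile: extend the official Tomcat 8 base image with our application
FROM tomcat:8.0
# copy the application archive into Tomcat's webapps directory
COPY myapp.war /usr/local/tomcat/webapps/
The build then produces an image that runs identically everywhere:
docker build -t myapp:1.0 .
# build the image from the Dockerfile in the current directory
docker run -d -p 8080:8080 myapp:1.0
# run the same image in any environment, configuration included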
Before we finally leave this discussion, let us take a brief tangent. Virtualization has so far been limited to servers, but new techniques now exist to virtualize the mobile ecosystem. This means a single mobile phone could run multiple mobile VMs, with far-reaching benefits such as using one phone for both home and work with different images. Similarly, the cost of developing and testing on mobile devices can be brought down by building against a VM and deploying it on any hardware.
Container virtualization is here to stay, so start preparing!
