Thursday, 12 November 2015

Ashes to ashes, Dust to dust

In this day and age, whenever the term "artificial intelligence" is mentioned, there's a good chance that most minds immediately jump to Skynet from Terminator, the malevolent AI from The Matrix, or David from A.I.: Artificial Intelligence. Most people have a skewed perception of AI technology, chief among those fears being the total annihilation of humankind or complete global destruction. The irony of reality, however, is that AIs lack the self-awareness that drives every living thing: the instinct to survive.

At any rate, artificial intelligence has greatly benefited mankind's foray into technology. We have progressed in leaps and bounds in various fields; our focus here is on its military, medical, and business applications.

Traditional military warfare centered on forming battlefield strategies, old men meeting in the war room to make decisions, and young men being sent off to fight and perish. Traditional wars are usually drawn out over long periods, measured in months or years, and the death toll piles up with every passing minute. Artificial intelligence takes human involvement out of the equation altogether. A chilling downside to the way AIs operate is that they have no semblance of emotion to guide them; their decision processes run on a "get from Point A to Point B" basis. The loss of a thousand lives would be utterly and completely insignificant in the eyes of an AI. The most effective way to work around this problem is to keep a human at the helm to make the macro decisions, and leave field control and micromanagement to the artificial intelligence.

The threat of war aside, artificial intelligences make good companions to doctors as well. As mentioned in an earlier post on medicine, a swarm of nano-scale robots can be programmed to swim through a patient's bloodstream to locate the source of a potential infection or disease (for example, cancer cells or internal injuries), then eliminate that source altogether. The nanobots are linked together in a rudimentary swarm network, and with the assistance of emergent programming they are able to adapt to any obstacles they encounter. This is akin to real-world predator-prey tactics, where a predator will not stop chasing its prey until it has caught it or lost track of it, in which case it will try to reacquire the target.

But why stop at saving lives? Why not profit with the assistance of AI as well? The stock market is volatile and unpredictable, and naturally, someone came up with the idea of applying mathematical algorithms to keep track of stock movements. The Internet age laid business data bare for the entire world to peruse, and a massive amount of it accumulated in a short time. Big Data was born, and companies started adopting the Big Data approach, where seemingly unrelated or unstructured pieces of data feed predictive models that help analyse future trends, which in turn can boost company profits. To assist business analysts in making sense of the data, artificial intelligence can be used to gather and summarize it, for instance with neural networks, effectively speeding up the analysts' decision making.
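To make the trend-spotting idea a little more concrete, here is a minimal sketch of one of the simplest possible approaches: comparing a short moving average of prices against a longer one. The prices, thresholds and window sizes are invented for illustration; this is not how any particular trading system described above actually works.

```python
# Minimal illustration: flag a trend when a short moving average
# crosses a long moving average. Prices below are invented.

def moving_average(values, window):
    """Average of the last `window` values."""
    return sum(values[-window:]) / window

def trend_signal(prices, short=3, long=7):
    """Return 'buy', 'sell' or 'hold' from two moving averages."""
    if len(prices) < long:
        return "hold"  # not enough history yet
    short_ma = moving_average(prices, short)
    long_ma = moving_average(prices, long)
    if short_ma > long_ma * 1.01:   # short-term momentum above the longer trend
        return "buy"
    if short_ma < long_ma * 0.99:
        return "sell"
    return "hold"

prices = [10.0, 10.2, 10.1, 10.4, 10.6, 10.9, 11.2, 11.5, 11.4, 11.8]
print(trend_signal(prices))  # 'buy' for this rising series
```

Real analytical systems obviously model far more than a single price series, but the principle is the same: turn raw numbers into a recommendation a human can act on quickly.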

Now, we must ask ourselves these questions: Should an artificial intelligence be made aware of its pseudo-sentience? Should they be made self-aware someday? Should they be accorded the same rights and principles as a human being? These questions might seem fanciful, meaningless even, but in the not too distant future, when artificial intelligences run companies or take care of basic human needs, such questions might well become relevant.

As a closing note, artificial intelligence is exactly what its name states: artificial. Our current technology is simply not advanced enough for an AI to form ethical or moral judgments beyond its programmed parameters. But in time, when processing power is no longer an issue, they could very well be considered one of us. Think about it: if people from nations around the globe can coexist in harmony (petty wars aside), can't we befriend an AI as well? Here's a quote from the movie A.I. Artificial Intelligence, spoken by the film's narrator:

"Those were the years when the icecaps melted due to the greenhouse gases and the oceans had risen and drowned so many cities along all the shorelines of the world. Amsterdam, Venice, New York forever lost. Millions of people were displaced. Climate became chaotic. Hundreds of millions of people starved in poorer countries. Elsewhere a high degree of prosperity survived when most governments in the developed world introduced legal sanctions to license pregnancies. Which was why robots, who were never hungry and did not consume resources beyond those of their first manufacture were so essential an economic link in the chain mail of society."

Written by Thinesh, Kaza, Sim Zheng Chi, Khoo Foo Sheng and Tan Benwu
  

AI Entrepreneurship: Future or Dealbreaker?




Ever seen businessmen and women crowding Wall Street, waiting patiently every morning for stock prices to be updated? Ever seen the looks of elation, or of sorrow, on the faces of the stockbrokers as the stocks are updated?

The stock market is a volatile thing, subject to the whims and fancies of the business world, and dependent on luck to a certain extent. The data available at hand is simply not enough for a human to make a 100% accurate judgement about the immediate future.

With the emergence of artificial intelligence, bad judgements could become a thing of the past. Computers have already proven efficient at crunching numbers and processing data, but they lack the conscious drive that humans bring to interpreting what those numbers mean. Artificial intelligences, on the other hand, are able to connect the dots between separate types of data and provide concrete predictions and conclusions.

The recent emergence of Big Data might be the saving grace that forces a paradigm shift in the way we analyse data. One company, Narrative Science, came up with an application that can synthesize available data and interpret it in an easily understandable form. Their end product is a program called Quill, which is able to summarize the data at hand and deliver it to the end user in the form of a story. The CIA seems to have taken an interest in the project, having backed the development of Quill as well (Lapowsky, 2014).

As we can plainly observe, our current world is immensely data-driven. With the rise of artificial intelligence, however, comes a very real risk: layoffs (White, 2015).

The business world has traditionally been very human-dominated; i.e. humans have the final say in investing in or liquidating stocks as they see fit. To assist in those decisions, jobs such as financial analyst and stockbroker were created to observe the trends of the stock market. Now what happens when an AI takes over the analytics and decision-making process? Suffice to say, the traditional job scopes will be swept away, dwarfed by the immense data-crunching capabilities of an AI, and despite what would be a definite increase in stock values, a great number of people will lose their jobs.

In a nutshell, the decision to utilize an AI in place of the traditional roles of entrepreneurship will have to be trodden upon with great care, as a significant number of aspects of working life would be displaced by the onset of an AI-driven economy. Now here's a point to ponder: imagine every company having an AI system of its own to run its business processes. If all the AIs run at the same level of intelligence, will the economy regress into a state of stalemate? In layman's terms, what happens when an unstoppable force meets an immovable object? (484 words)

Written by Thinesh and Kaza

References:

Lapowsky, I.(2014). Inc.com. 4 Big Opportunities in Artificial Intelligence. [Online] 28 January 2014. Available from http://www.inc.com/issie-lapowsky/4-big-opportunities-artificial-intelligence.html [Accessed 10 November 2015]

White, B.(2015). Forbes. Artificial Intelligence Is Already Here, But Is Your Business Ready For It? [Online] 6 April 2015. Available from http://www.forbes.com/sites/theyec/2015/04/06/artificial-intelligence-is-already-here-but-is-your-business-ready-for-it [Accessed 10 November 2015]

Sunday, 8 November 2015

Catalyst or Reactive? The Next Global Arms Race




Calling back to the previous blog post, there was an excerpt on the ethics of entrusting military hardware to the calculated confines of artificial intelligence. Artificial intelligences lack a crucial factor that is ever present in all human beings: the sentience to form judgments based on emotions.

Battles fought by artificial intelligences would not be drawn out over months or years like most conventional warfare. An AI will choose the most obvious route to victory, which typically means destroying every facet of the enemy until none are left standing. The deployment of weapons of mass destruction would be a given, and the death toll unthinkable.

Ever watched Captain America: The Winter Soldier? In the movie lies a concept called Project Insight: it harnesses the power of Big Data by using a person's past behaviour to predict their threat level in the future. It's a fanciful concept, but surprisingly grounded in reality. All the AI has to do is assess a person against predefined criteria, with no regard to social status or anything else, and detain the person if he or she constitutes a threat, with no human intervention whatsoever. This would breed paranoia among the populace, as people would constantly fear for their lives, and repentant criminals might be wrongfully detained and incarcerated without any due notice or reason.
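To show just how crude such automated judgement can be, here is a toy sketch of rule-based threat scoring. The criteria, weights and threshold are entirely invented; no claim is made about how Project Insight (fictional) or any real system works.

```python
# Toy example: score a person against predefined criteria and flag them
# if the total crosses a threshold. Criteria, weights and threshold are
# invented for illustration only - note that nothing here involves a
# human reviewing the decision, which is exactly the worry raised above.

CRITERIA_WEIGHTS = {
    "prior_offences": 3,
    "flagged_associations": 2,
    "suspicious_purchases": 1,
}

def threat_score(record):
    """Weighted sum of the predefined criteria found in a person's record."""
    return sum(weight * record.get(key, 0)
               for key, weight in CRITERIA_WEIGHTS.items())

def is_flagged(record, threshold=5):
    return threat_score(record) >= threshold

person = {"prior_offences": 1, "flagged_associations": 1, "suspicious_purchases": 0}
print(threat_score(person), is_flagged(person))  # 5 True -> flagged, no appeal, no context
```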

If humanity values its progress and achievements, there comes a time when we have to step back and contemplate the validity and necessity of our research and development projects. Is it safe? Will it be beneficial or detrimental in the long run? These are questions to ponder before we do something that would cause us great regret and misery decades, centuries or millennia later. (297 words)

Written by Thinesh and Khoo Foo Sheng

References:

Shandrow, K.L.(2015). Entrepreneur. Elon Musk, Stephen Hawking Warn That AI Military Robots Could Ignite The Next Global Arms Race. [Online] 27 July 2015. Available from http://www.entrepreneur.com/article/248872 [Accessed 7 November 2015]

Friday, 6 November 2015

The Ethics of Military Artificial Intelligence


Humans have dominated the fields of war since recorded history, and there are no signs of all conflicts grinding to a halt anytime in the near future. The only logical limit to the fine strategic nuances of war seems to be the creative mass-murdering capabilities of the human mind. All that, however, may soon change.

See, the human mind operates within an emotional framework (which is to say, our thoughts are somewhat guided by our emotions). The human conscience prevents us from becoming cold, trigger-happy gunners. Machines, on the other hand, are not bound by such parameters, as they operate on hard logic.


The very prospect of handing over the controls of a nation's war machinery to a governing artificial intelligence might seem tantalizing at first, given that the hard decisions are made by the machine, not man; but walk a little further down that road and the ramifications of a machine-led war quickly become obvious. The very concept of a death toll would seem meaningless to an AI, and ceasefires would not be called by either party, an AI being unable to comprehend the value of conceding defeat or living to fight another day.


At any rate, the control of lethal weapons should stay in the hands of sentient, living beings capable of making moral judgments. Current artificial intelligence technology simply has neither the processing power nor the expertise to accurately model human behaviour. Some aspects of technology are indeed best left to the imagination. (257 words)

Written by Kaza and Thinesh

References:


Knight, W.(2015). Military Robots: Armed, but How Dangerous? MIT Technology Review [Online] August 3 2015. Available from http://www.technologyreview.com/news/539876/military-robots-armed-but-how-dangerous/. [Accessed 5 November 2015]

Wednesday, 4 November 2015

The Future of Surgeons?



The earlier post concerned healthcare and how artificial intelligence can tap into that area. What if, after years of effort, human beings finally come up with an artificial intelligence doctor? Humans make errors; it is part of life. But imagine a robot able to do the calculations in seconds to determine the correct drug dosage for a patient, without any risk later on, just by using its supercomputer brain.
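For a flavour of the kind of calculation meant here, below is a minimal sketch of weight-based dosing with a hard safety cap. The mg-per-kg rate and maximum dose are placeholder numbers, not real prescribing information, and no real system would be this simple.

```python
# Illustrative weight-based dose calculation with a safety cap.
# The mg-per-kg rate and maximum dose are placeholder values,
# not medical guidance of any kind.

def dose_mg(weight_kg, mg_per_kg=15.0, max_mg=1000.0):
    """Return a dose proportional to body weight, capped at max_mg."""
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    return min(weight_kg * mg_per_kg, max_mg)

print(dose_mg(70))   # 1000.0 -> the cap applies (70 * 15 = 1050)
print(dose_mg(20))   # 300.0 mg for a 20 kg patient
```

The arithmetic is trivial; the point is that a machine can apply such rules instantly and consistently, every single time.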

Professor Richard Lilford, chair of public health at the University of Warwick, thinks this won't happen, and that even if it does, the artificial intelligence will only be used as a second opinion, or perhaps a first one; the final call will still be made by the doctors themselves. It is undeniable that computers produce effective results, but in the medical field, especially when dealing with patients who are seriously or terminally ill, having a real-life doctor present is genuinely comforting, because that supportive human element is vital. With an artificial intelligence doctor, patients would not get such comforting support.

"It’s lethal to think that you can separate the psychological care from the physical care. They are part and parcel of the same thing" (Powell, 2015).

For now, the advancement of artificial intelligence in the medical field has brought humankind many opportunities and a variety of methods for treating, detecting and preventing diseases. One example is Deep Genomics, which combines machine learning techniques with artificial intelligence to study the human genome. In a way, it is able to predict whether or not a person will develop a certain disease. Deep Genomics is different because it has access to a database of 300 million genetic variants. Like other analyses, it uses algorithms to work out how an individual's mutations could cause problems in the future.
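The sketch below only illustrates the general idea of checking an individual's mutations against a database of known disease-linked variants and summing up a risk score. The variant names, diseases and weights are invented, and this bears no relation to Deep Genomics' actual models, which learn from data rather than reading a lookup table.

```python
# Toy illustration: look up an individual's genetic variants in a tiny,
# invented database of disease-linked variants and report an aggregate
# risk score per disease. Real systems use learned models over hundreds
# of millions of variants, not a hand-written table like this.

VARIANT_DB = {
    "rs0001": ("disease_A", 0.7),
    "rs0002": ("disease_A", 0.2),
    "rs0003": ("disease_B", 0.5),
}

def risk_report(person_variants):
    """Sum the (invented) risk weights of matched variants per disease."""
    risks = {}
    for variant in person_variants:
        if variant in VARIANT_DB:
            disease, weight = VARIANT_DB[variant]
            risks[disease] = risks.get(disease, 0.0) + weight
    return risks

print(risk_report(["rs0001", "rs0003", "rs9999"]))
# {'disease_A': 0.7, 'disease_B': 0.5} -> unknown variants are simply ignored
```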

In other words, it is safe to say that the future of surgeons around the world is secure for now, as that personal human-to-human interaction plays an important role in treating and handling patients. (386 words)

Written by Kaza and Khoo Foo Sheng


References:

Powell, J. (2015), The Telegraph. Do robots feature in the future of medicine? [Online] 15 October 2015. Available from http://www.telegraph.co.uk/sponsored/education/festival-of-the-imagination/11921755/will-robots-take-over.html. [Accessed 3 November 2015]

Keshavan, M. (2015), MedCity News. Using deep learning and artificial intelligence to map the genome and predict disease. [Online] 22 July 2015. Available from http://medcitynews.com/2015/07/deep-learning-artificial-intelligence-genome/. [Accessed 3 November 2015]

Tuesday, 3 November 2015

Intelligence Healthcare

Concerning the area of medicine, how far has technology in this particular field helped human beings? Are we at the stage where artificial intelligence robots perform surgery on patients? Not yet, but what we do have are artificial intelligences that can assist people with their healthcare.



According to Eugene Borukhovich (2015), there are five areas where artificial intelligence can advance further in the future. Firstly, using artificial intelligence as a predictor of drug resistance. Secondly, using artificial intelligence to support medication adherence: AiCure, for example, integrates artificial intelligence with a smartphone to check whether a person has taken their medicine. Thirdly, the development of smart drugs with artificial intelligence; three major companies, IBM, Johnson & Johnson and Sanofi, are currently collaborating on this. They are teaching a supercomputer to analyse and truly understand the scientific outcomes of each clinical trial conducted, so that the results can help develop better drugs and even better treatments. By doing so they can see clearer patterns and help physicians make better decisions when prescribing medicines to the right patients, minimizing side effects while maximizing effectiveness. Fourthly, support for Alzheimer's patients: artificial intelligence that can give Alzheimer's patients a better quality of life. It turns out the Department of Computer Science at the University of Washington has already explored this area; its system uses artificial intelligence to substitute for the parts of memory and the problem-solving skills that are lost along the way to the disease. Lastly, wearable health technology: Zulfi Alam, a general manager at Microsoft, intends to use Internet of Things (IoT) applications to collect data, from which an algorithm is built that knows the user's biometrics, recognises patterns, and eventually gives the user a chance to improve their health. (362 words)
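To give a flavour of that last point, here is a minimal sketch of spotting an unusual reading in a stream of wearable heart-rate data using a rolling mean and standard deviation. The readings and thresholds are invented; real pattern recognition on biometrics is far more sophisticated.

```python
import statistics

# Minimal anomaly check over wearable heart-rate readings (invented data):
# flag a reading that sits far outside the recent rolling average.

def is_anomalous(history, reading, window=10, n_std=3.0):
    """True if `reading` deviates from the recent mean by more than n_std sigmas."""
    recent = history[-window:]
    if len(recent) < 2:
        return False
    mean = statistics.mean(recent)
    std = statistics.stdev(recent)
    return std > 0 and abs(reading - mean) > n_std * std

resting_hr = [62, 64, 63, 65, 61, 63, 64, 62, 63, 64]
print(is_anomalous(resting_hr, 66))   # False - within normal variation
print(is_anomalous(resting_hr, 110))  # True - worth a closer look
```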

Written by Kaza and Thinesh


References:

Borukhovich, E. (2015), Health Works Collective. Artificial Intelligence for Healthcare: Are We There Yet?. [Online] October 20 2015. Available from http://www.healthworkscollective.com/eborukhovich/318831/artificial-intelligence-healthcare-are-we-there-yet. [Accessed 2 November 2015]

Hernandez, V. (2015), International Business Times. 5 artificial intelligence applications in healthcare to dispel fear of robots with AI. [Online] October 28 2015. Available from http://www.ibtimes.com.au/5-artificial-intelligence-applications-healthcare-dispel-fear-robots-ai-1478480. [Accessed 2 November 2015]

Friday, 16 October 2015

Computerized Data Analyst

The previous post introduced big data, which is now a major trend. Out of all this big data emerged the data scientist, or data analyst. With business intelligence taking over companies, more and more complex dashboards are being designed. This leads companies to hire data analysts to explain the data being shown and what kinds of conclusions can be drawn from it. It is no surprise that when it comes to data analysts, the world is facing a shortage of such workers. Moreover, even among the analysts who are hired, or the graduates working as data analysts, not many can make proper decisions based on the data they have collected and tested. So what do companies do to address this problem? They are turning to artificial intelligence for help.



For example "Yseop Smart Business Intelligence" is a software that was created by the Yseop enterprise company that focus on Artificial Intelligence. How does Yseop Smart BI works? So with powerful tool and software it is able to analyze and explain data very quickly. Moreover it is able to turn data into narrative, so not only does it explains what the whole data is about but it also informs the user what kind of actions to take and why those actions need to be taken. Not only that, this software is the by far one of a kind in the current market that is able to explain clearly all of its reports and findings in multiple languages in real time. The current languages that the software support are English, Spanish, French and German. It is powered by the Natural Language Generation known as NLG. The image below shows how Yseop Smart BI works. (305 words)


Written by Kaza and Thinesh


References:

Yseop Smart Business Intelligence & Reporting Software. (2014). Yseop Smart Business Intelligence [Online]. Available from http://yseop.com/EN/smart-business-intelligence. [Accessed 15 October 2015].

Bridges, T. (2014), Rude Baguette. [Interview] Yseop's revolutionary approach to turning data into intelligent text. [Online] August 18 2014. Available from https://www.rudebaguette.com/2014/08/18/interview-yseops-john-rauscher-turning-data-intelligent-text/. [Accessed 15 October 2015].

Thursday, 15 October 2015

Data Mining



Data mining is a field mostly concerned with extracting information from vast amounts of data. Google Search is a good example of such a system, performing the analysis step of the knowledge discovery in databases process. It is a computational process that analyses patterns across searched results drawn from very large data sets, also known as "Big Data". Likewise, question answering systems such as Siri involve an interplay between artificial intelligence and machine learning methods. The main objective of a data mining procedure is to extract information from a data set and change it into a meaningful structure for later use.


Another example is association rule mining (pattern mining), applied in the retail industry to mine consumers' buying behaviour from vast amounts of historical purchase data. (129 words)
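Below is a bare-bones sketch of that retail idea: count how often pairs of items are bought together and keep the pairs above a support threshold. The transactions are invented, and real association rule miners (e.g. Apriori) handle itemsets of any size plus confidence and lift, not just pair counts.

```python
from itertools import combinations
from collections import Counter

# Bare-bones pattern mining: count item pairs that appear together in
# the same basket and keep the frequent ones. Transactions are invented.

transactions = [
    {"bread", "butter", "milk"},
    {"bread", "butter"},
    {"milk", "eggs"},
    {"bread", "butter", "eggs"},
]

def frequent_pairs(baskets, min_support=2):
    counts = Counter()
    for basket in baskets:
        for pair in combinations(sorted(basket), 2):
            counts[pair] += 1
    return {pair: n for pair, n in counts.items() if n >= min_support}

print(frequent_pairs(transactions))
# {('bread', 'butter'): 3} -> "customers who buy bread tend to buy butter"
```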

Written by Sim Zheng Chi


References:

Pena, F. (1988). Artificial Intelligence is Coming. European Management, 6(2), pp. 174-177.

Holloway, C. (1983). Strategic Management and Artificial Intelligence. Long Range Planning, 15(5), pp. 89-93.

Machine Learning






Machine learning is a sub-field of computer science that grew out of pattern recognition and computational learning theory in AI. Machine learning studies and develops algorithms that can learn from, and make predictions on, data. Many kinds of algorithms are used, such as classification, clustering, regression, feature selection, and collaborative filtering (recommendation systems), and many database and decision support systems in finance, retail, and e-commerce are built with these techniques. The recommendation system of Amazon.com is a fine example of machine learning: it identifies how customers behave when buying, and gives customers more opportunities to browse, since similar recommended products will appear. Amazon developed its own algorithm, called item-to-item collaborative filtering. Instead of discovering neighbouring users who share the same interests, it finds the items that tend to be purchased together with the selected item. Among those items a customer can see item similarity, because Amazon has statistics on what percentage of people buy them together. (174 words)
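Here is a simplified sketch of that item-to-item idea: for a chosen item, count which other items appear in the same customers' purchase histories and recommend the most frequent co-purchases. The purchase data is invented, and Amazon's production algorithm (as described in the Linden et al. paper) is far richer, using similarity scores rather than raw counts.

```python
from collections import defaultdict

# Simplified item-to-item collaborative filtering: recommend items that
# were most often bought by the same customers as the selected item.
# Purchase histories are invented.

purchases = {
    "alice": {"book_a", "book_b", "lamp"},
    "bob":   {"book_a", "book_b"},
    "carol": {"book_a", "lamp", "mug"},
}

def similar_items(item, histories, top_n=3):
    co_counts = defaultdict(int)
    for bought in histories.values():
        if item in bought:
            for other in bought - {item}:
                co_counts[other] += 1
    # most frequently co-purchased items first
    return sorted(co_counts, key=co_counts.get, reverse=True)[:top_n]

print(similar_items("book_a", purchases))
# e.g. ['book_b', 'lamp', 'mug'] -> "customers who bought book_a also bought..."
```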


Written by Sim Zheng Chi


Reference:

Linden, G., Smith, B. and York, J. (2003). Amazon.com Recommendations: Item-to-Item Collaborative Filtering. IEEE Internet Computing.

Wednesday, 14 October 2015

Business Intelligence



Business intelligence (BI) is another innovation used in business to transform raw information into meaningful data. This activity, in other words data mining, helps business administrators make precise choices based on the information given. With the expanding use of business intelligence, organisations can understand their customers' behaviour, which allows them to target their business strategies and make precise decisions.

Although BI focuses on business goals and cannot be compared directly to data mining and machine learning, it is a powerful complement to AI in helping a business grow, because BI produces valuable results for decision makers. Google Analytics is one prominent example of such a system. Most business reports and SWOT analyses come under business intelligence. (128 words)

Written by Sim Zheng Chi

Reference:

Singh, R. K., 2014. What is the difference between artificial intelligence, machine learning, data mining and business intelligence? How they are related?. [Online] Available from https://www.quora.com/ [Accessed 1 October 2015].



Saturday, 3 October 2015

Nano Healers






The brain is an intricately designed force of nature. Artificial intelligence technologies attempt to replicate the functions of the human brain by mimicking its various neural pathways, essentially thinking like a human does.

In the medical field, artificial intelligence is able to process vast amounts of information such as the patient’s medical history, immediate family history, environmental data, and even world population data, and identify patterns that were seemingly hidden from human view. Harnessing the power of big data, artificial intelligences can skim through an unprecedented swath of data to obtain and analyse the results.

There are a variety of applications for medical artificial intelligence. One technology currently in development involves injecting nano-robotic particles (nanobots) into the bloodstream and using them to eradicate harmful agents that reside within the body, such as bacteria, tapeworms and potentially even viruses. The artificial intelligence communicates within a neural network framework, essentially improvising and evolving its strategies while operating within its target parameters.



The underlying principles of medical nanotechnology are closely related to swarm AI technology, which also has its uses in the military. Like ants in a colony, these nanoparticles strive towards a common goal, akin to a predator stalking its prey, all the while minimizing flaws and maximizing the bounty (or in the medical context, the health benefit). To sum it up, think A Bug's Life. (223 words)
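The toy sketch below gives a feel for the swarm principle: each simulated "nanobot" follows only a local rule (step toward the stronger chemical signal), yet the whole group converges on the target. Everything here is one-dimensional and invented; it is not a model of any real swarm robotics system.

```python
import random

# Toy swarm: each "nanobot" only follows a local rule - step toward the
# stronger chemical signal - yet the whole group converges on the target.
# One-dimensional and entirely invented, just to show the principle.

TARGET = 50.0  # position of the "infection site"

def signal(pos):
    """Chemical concentration: stronger closer to the target."""
    return 1.0 / (1.0 + abs(pos - TARGET))

def step(pos, step_size=1.0):
    """Move in whichever direction the local signal improves."""
    left, right = signal(pos - step_size), signal(pos + step_size)
    return pos - step_size if left > right else pos + step_size

swarm = [random.uniform(0, 100) for _ in range(20)]
for _ in range(100):
    swarm = [step(bot) for bot in swarm]

print(max(abs(bot - TARGET) for bot in swarm))  # every bot ends within one step of the target
```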

Written by Thinesh, Kaza and Tan Benwu

References:
Xining, H. (2015). MedTech Boston. Swarm Robotics: The Future of Medicine? [Online] 6 October 2015. Available from https://medtechboston.medstro.com/swarm-robotics-what-you-need-to-know-about-the-future-of-medicine/ [Accessed 2 October 2015]

Sunday, 27 September 2015

Swarm Intelligence




Has anyone read the book “Prey” by Michael Crichton? The story describes an artificial intelligence experiment gone awry, where the computing power of a swarm of nanobots increases exponentially at an accelerating rate, and the swarm comes to pose a threat to all living things. Sounds scary, doesn’t it?

Well, you can rest easy for now, as the aforementioned scenario is highly implausible with existing technology (the keyword here being existing). It does, however, have its roots in current military and aerospace technology.

In loose terms, swarm intelligence describes complex, intricate behaviours emerging from a substantial number of individual agents following simple rules. To phrase this in layman’s terms, imagine a colony of termites. Somehow, somewhere along the line, the termites figured out the best approach to gathering food and constructing mounds. No leader gave an order to carry out any of these activities; it just happened.
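A minimal sketch of that "simple rules, no leader" idea follows. Each agent only looks at its nearby neighbours and nudges itself toward their average position; nobody coordinates anything, yet the group typically ends up clustered together. The positions, radius and step factor are arbitrary choices for illustration.

```python
import random

# Emergence from simple local rules: each agent only knows its nearby
# neighbours and nudges itself toward their average position. No agent
# is a leader, yet the group typically collapses into a tight cluster.

def neighbours(i, agents, radius=30.0):
    return [p for j, p in enumerate(agents) if j != i and abs(p - agents[i]) < radius]

def update(agents):
    new = []
    for i, pos in enumerate(agents):
        near = neighbours(i, agents)
        if near:
            centre = sum(near) / len(near)
            pos += 0.1 * (centre - pos)  # small nudge toward the local group
        new.append(pos)
    return new

agents = [random.uniform(0, 100) for _ in range(30)]
for _ in range(200):
    agents = update(agents)

print(max(agents) - min(agents))  # much smaller than the initial 0-100 spread
```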

How would this be applied to existing military strategies? Well, imagine a group of 20 tanks defending the border of our country. Now, let’s take it a step further and say that all 20 of those tanks are unmanned and rely on swarm artificial intelligence. On their own, the tanks can detect rapidly approaching threats and neutralize them with extreme prejudice; but when supplied with aerial data from unmanned drones and fed with data from geosynchronous satellites, the country’s borders would effectively be impervious to attack, as the networked intelligence linking both the ground and aerial vehicles provides an unprecedented amount of data that can be analysed and utilized by the defence forces.



Too much to digest? Well, think Terminator, where we are the humans (well, duh) and Skynet is the robots (what else would they be?). We would be curb-stomped and slaughtered. Arnold Schwarzenegger would be proud. (297 words)

Written by Thinesh and Kaza

References:
Tucker,P.(2014). Defense One. Inside the Navy's Secret Swarm Robot Experiment. [Online] 5 October 2014. Available from http://www.defenseone.com/technology/2014/10/inside-navys-secret-swarm-robot-experiment/95813/ [Accessed 26 September 2015]

Friday, 11 September 2015

The paradigm shift of the 21st century - ARTIFICIAL INTELLIGENCE






The human brain could very well be considered the most powerful computer in the world. The brain’s biological intricacies are wired in such a way that it would be nigh impossible for a machine to emulate its computing architecture and power. Despite all the power the human brain possesses, however, we lack a certain focus, and our performance is governed by our emotional states. It was then that we realized we needed a substitute for the human brain.

Cue ARTIFICIAL INTELLIGENCE.


In the video below, André LeBlanc explains the current and future impacts of artificial intelligence on industry and science, and how it will benefit and accelerate human progress.


In summary, André LeBlanc muses that technology is improving at an exponential rate, and artificial intelligence will soon dwarf all of humanity in its thinking and reasoning skills. (148 words)

Written by Thinesh and Kaza

Reference:

Artificial Intelligence and the future, 2015. (video file) Available from https://www.youtube.com/watch?v=xH_B5xh42xc [Accessed 10 September 2015].