
First, you need a definition – what is AI? How do we know that something is “intelligent”?
As humans, we tend to associate intelligence with problem-solving. For artificial intelligence (AI), there is, famously, the Turing Test, proposed by the British mathematician Alan Turing in 1950. If a machine called you on the telephone, would you know it's a machine? For most of the history of AI the answer would have been "yes," but lately it has been leaning toward "maybe." Once the answer is "no," the machine is said to have passed the test, a mark of true machine intelligence.
One widely accepted definition of AI is "the ability of computer systems to reason, discover meaning, generalize, or learn from past experience" (Encyclopedia Britannica). It is one of those ideas we all think we understand – until we're asked to explain it. It appears in many guises – expert systems, neural networks, machine learning algorithms – and is embedded in many of the technologies we now encounter in our daily lives.
The term "artificial intelligence" was coined in 1956 at a conference at Dartmouth College. From the start, there was broad agreement on the potential, but no practical way to put it to work with the rudimentary (and expensive) computing power then available. True utility awaited the advent of more powerful computers, advanced algorithms, and access to huge data sets.
Ebb and flow
Interest in, and funding for, AI has ebbed and flowed over the years. The early excitement over its possibilities was followed by what came to be known as the first "AI Winter," which lasted from 1974 to 1980 as funding dried up. (A second AI Winter followed from 1987 to 1993.)
But behind the scenes a powerful engine continued to advance the science: Moore's Law – the observation that the speed and capability of computers double about every two years even as costs fall. AI capabilities expanded in step, with the development of "expert systems" intended to mimic human problem-solving, "neural networks," and the successful application of "machine learning" to a broad range of problems.
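To put that doubling in perspective, here is a back-of-the-envelope calculation (Moore's Law is an empirical rule of thumb, not an exact law) showing how quickly it compounds:

```python
# Rough illustration of Moore's Law: computing capability doubling
# roughly every two years. An empirical observation, not an exact law.

def moores_law_factor(years: float, doubling_period: float = 2.0) -> float:
    """Approximate growth factor in computing capability over `years`."""
    return 2 ** (years / doubling_period)

# Sixty years of doubling every two years is 2**30 –
# roughly a billion-fold increase in capability.
print(f"{moores_law_factor(60):,.0f}x")  # 1,073,741,824x
```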
Inevitably, AI would find its way into the world of investing, with algorithmic and high-frequency trading among its early applications. It's now rolling out in products designed for financial advisors and their clients, including exchange-traded funds (ETFs). Our BTD Capital Fund (NYSE: DIP) is the latest example. An actively managed ETF, the fund has been made possible by the same advancements that are driving AI growth generally – faster computers, better algorithms, and the ability to analyze massive amounts of data in fractions of a second.
But going back to first principles, we aren't so much looking to discover meaning as to uncover patterns, many of which may not be apparent to a human observer. We are solving a problem: identifying the best candidates for a "buy the dip" strategy, drawing on extensive market data gathered from multiple proprietary feeds spanning more than 15 years. Dynamic machine learning and proprietary algorithms allow the AI to "learn" in real time as it seeks to rapidly identify short-term price declines in individual stocks that appear likely to revert to the mean.
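The fund's actual models are proprietary, but purely as an illustration of the general idea, here is a minimal sketch of one common way to flag a short-term dip: score the latest decline against the stock's own recent volatility and flag statistical outliers. All prices and thresholds here are hypothetical, and a real system would still need to assess whether a flagged dip is actually likely to mean revert.

```python
import numpy as np

def flag_dip(prices: np.ndarray, z_threshold: float = -2.0) -> bool:
    """Flag a stock whose latest one-day move is unusually negative
    relative to its own recent history – a crude stand-in for the
    'dip detection' step of a buy-the-dip strategy.

    prices: recent daily closing prices, oldest first.
    """
    returns = np.diff(np.log(prices))            # daily log returns
    z = (returns[-1] - returns.mean()) / returns.std()
    return z < z_threshold                       # unusually sharp decline

# Hypothetical price series: gentle drift, then a sharp drop on the last day.
recent_prices = np.array([100.0, 101.0, 100.5, 101.2, 100.8, 101.5,
                          101.0, 101.8, 101.3, 102.0, 101.6, 102.2,
                          101.9, 102.5, 102.1, 102.8, 97.0])
print(flag_dip(recent_prices))  # True: the final drop is a statistical outlier
```

A production system would, of course, work on streaming data across thousands of names and combine many such signals; this sketch only shows the shape of the pattern-finding problem.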
AI has followed a path familiar from many new ideas: starting with the impossible, proceeding to the improbable, and arriving over time at the inevitable. Investing is a great use case, where AI's analytical speed and ability to process massive amounts of data can create an edge. That's what we seek to do with DIP.
To find out more about how we do that, and how you (and your clients) can invest, go here.