As companies have realized the commercial potential of healthcare data sitting idle in health system databases, AI technology in the healthcare space has begun to expand. Patient data contains valuable information that could bring drastic improvements to diagnosis, care, and operations.
Integrated healthcare systems have been collecting and storing patient data since they came into existence. Sanford Health, a large non-profit rural integrated health system recently featured in Harvard Business Review, holds data spanning a range of categories, from admission, diagnostic, treatment, and discharge records to online activity between patients and providers.[1] Recognizing the potential of this robust database, Sanford has begun working with research institutions to find ways to improve the quality and reduce the costs of healthcare. The arrangement is a win-win: health system databases are leveraged and utilized effectively, and partnering institutions gain access to previously inaccessible data for research that may bring positive impact to healthcare.

Within the cognitive health realm, the Data for Good movement has changed perceptions of how technology can be used. As populations around the world age, AI technologies such as predictive analytics have increasingly shown positive results, enabling earlier and better diagnosis than the traditional tests in place today and helping to slow cognitive decline and reduce the negative impact of mild traumatic brain injuries. BrainCheck, a cognitive assessment and management product, is a good example. By taking a holistic view of cognitive function over a whole lifetime, cognitive assessment and care can add value across concussions in teenagers, substance use in adults, and dementia in the elderly.

Until recently, technology adoption has been slow within the healthcare space. If the mountains of data held by large healthcare systems and the new technology fail to converge through mutual collaboration, groundbreaking improvements and discoveries may never come to light.[2] Tensility's Health Data Link offers a solution to this collaboration problem by establishing private data linkages tailored to each organization's data governance and data linkage needs. With these obstacles in mind, the Data for Good movement needs to push past the privatization of data and technology and encourage powerful institutions and technologists to work together to improve healthcare outcomes and reduce costs.

[1] Hsu, Benson S., and Emily Griese. “Making Better Use of Health Care Data.” Harvard Business Review, Harvard Business School Publishing, 10 Apr. 2018.
[2] Marshall, Phil. “Health Care Bots Are Only as Good as the Data and Doctors They Learn From.” VentureBeat, 22 June 2018.
Data for Good is a recent development in the evolution of data science. Although the term has been around for less than a decade, the ideas of social responsibility and of using creative methods to combat social challenges have been around much longer. At its core, Data for Good is the idea of using data responsibly to solve societal issues across a variety of industries for the betterment of the world.
Not limited to any one industry, Data for Good applies to a wide span of areas including public health, poverty, social justice, and the environment. A recent initiative within this space is the smart city movement. Even though not all cities have access to the necessary data, the overall concept is embodied by the use of data to improve public services. The partnership between New York City and Columbia University's Data Science Institute to reduce floatable trash highlights the positive impact of using data for social good.[1] Data for Good has also been recognized by other institutions of higher learning: the University of Chicago offers a Data Science for Social Good Summer Fellowship to encourage and train data scientists to use their skills on projects with social impact in areas such as transportation, economic development, and international development.

The ongoing rise in cognitive health issues has sparked the belief that technology can be used to slow this trend. Already visible within the healthcare realm, new developments in data science and AI for personalized medicine, senior care, addiction recovery, and cognitive care for dementia support the idea that declines in human cognition can be slowed or reversed to improve our mental health and general wellbeing. Instead of submitting to the notion that technology will lead to everyone's demise, AI carries the hope that technology can boost human intelligence.[2]

Data for Good is also gaining momentum outside the educational and healthcare realms. Numerous Data for Good platforms have emerged in the last five years or so, thanks to the rapid acceleration of big data. A prime example is Bloomberg's Data for Good Exchange, an annual forum that focuses on the intersection of data science and social good and where this combination could lead. The theme of the 2017 conference, “With Great Data Comes Great Responsibility,” reflects the general concept. A growing consensus among universities, businesses, and political leaders actively supporting this idea broadens the movement beyond data scientists and AI innovators.

[1] Fuchs, Ester R. “Smart Cities, Stupid Cities, and How Data Can Be Used to Solve Urban Policy Problems.” Tech At Bloomberg, Bloomberg Finance L.P., 28 Aug. 2017.
[2] Gazzaley, Adam. “The Cognition Crisis.” Future Human, Medium, 9 July 2018.

In recent years, Machine Learning (ML) has grown beyond a simple buzzword. From Google Maps to fraud detection to Netflix, there are countless ways in which ML is translated into solutions that positively influence and permeate our daily lives.
Here at Tensility, we often get questions about ML and whether it is different from AI. ML, an important branch of AI, can be quite complicated to understand without any background. Machine Learning: A Primer, an article recently written by Lizzie Turner on Medium, covers the what, who, when, where, how, and why of ML, condensing the answers in a simple way and offering valuable insight, tangible examples, and helpful graphics that anyone can understand.

ML is composed of algorithms that sift through data in order to reach a conclusion or make a prediction. The programmer's role in ML is not to program the machine to complete a task, but to teach it how to develop algorithms itself, learning about the data and even from its own experience. Three broad approaches are supervised learning, unsupervised learning, and reinforcement learning. As detailed in Turner's article, supervised learning deals with labeled data, such as sorting spam email, while unsupervised learning deals with unlabeled data and is often used for big data visualization. Reinforcement learning happens when the machine adapts its behavior to maximize performance, as in Google DeepMind's AlphaGo. (A minimal code sketch of the first two approaches follows at the end of this post.)

How ML works is more complicated. Drawing on mathematics (linear algebra, calculus, and statistics), algorithm types include regression, instance-based, decision tree, Bayesian, clustering, deep learning, neural network, and others. Of these, regression algorithms are among the most favored due to their speed. Ensemble methods built on decision trees combine many weak learners into a single stronger model that can make more accurate predictions.

We too believe that ML has the power to positively influence people's lives and the way they work. In areas such as brain health and cognition, forecasting supply and demand balance and longevity, and broader spaces like healthcare and network security, there is great potential for unique breakthroughs that could change the way many of these industries operate today.
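To make the supervised versus unsupervised distinction concrete, here is a minimal sketch in Python using scikit-learn. The library choice and the synthetic toy data are our own assumptions for illustration, not anything taken from Turner's article:

```python
# Minimal illustration of supervised vs. unsupervised learning
# (synthetic data; library choices are ours, for illustration only).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Supervised: the data arrives with labels (think spam / not spam),
# and the model learns a mapping from features to labels.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
classifier = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("supervised accuracy:", classifier.score(X_test, y_test))

# Unsupervised: no labels at all; the algorithm finds structure
# (here, three clusters) on its own.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", [list(clusters).count(c) for c in range(3)])
```

Reinforcement learning is harder to show in a few lines, since it requires an environment for an agent to interact with, so it is omitted here.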
Start with proprietary data. For a start-up taking the long journey of building an enterprise software company, it is imperative to have a relevant and hard-to-replicate data set. There are several ways start-ups tackle this problem. The first is to convince a charter customer to provide their operational data in exchange for an economic incentive for being the charter customer (Tensility's 2DA Analytics did this to get started). The second is to devise a strategy to gather and create your own data set, which is particularly useful if the process or behavior you are trying to model is not stored in any enterprise operational system (Tensility's Triggr Health is a good example of this path). A third is to leverage sponsored research projects in exchange for an economic incentive (Tensility's Health DataLink proved out their product in this fashion). Other, less valuable ways of gaining access to data, such as procuring third-party data services or gathering publicly available sources, lead to unattractive start-ups because the primary data set can be easily replicated.

Develop models to frame and validate the size of the problem. Leverage open source AI platforms to show your models are valid and provide the proper amount of predictive and/or prescriptive results. Problems requiring ML and deep learning platforms can use Google's TensorFlow, Caffe from UC Berkeley, Microsoft's CNTK, Tencent's Angel, Baidu's PaddlePaddle, or MXNet from AWS/Baidu/CMU. For natural language processing problems, open source projects like NLTK, TextBlob, Gensim, or spaCy are good places to start (see the short sketch below).
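As an illustration of how little code such a starting point requires, here is a minimal sketch using spaCy, one of the libraries named above. The sample sentence and the choice of pipeline steps are our own assumptions:

```python
# Minimal NLP starting point with spaCy (sample text is illustrative).
# Requires the small English model: python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Sanford Health patients message their providers about symptoms.")

# Tokenization, part-of-speech tags, and lemmas come for free.
for token in doc:
    print(token.text, token.pos_, token.lemma_)

# Named entities, a common first feature for enterprise NLP prototypes.
for ent in doc.ents:
    print(ent.text, ent.label_)
```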
The goal of this effort is to develop a set of tested results that can be shown to knowledgeable prospects. This feedback will ensure you are headed in the right direction and solving a high-value problem (Tensility's Genivity did an outstanding job at this).

Design the workflow. A well-designed AI solution will modify the current workflow in the enterprise and thereby realize the full potential of providing new capabilities to enterprise workers. Start-ups should map out the current workflow and the roles of current users, then creatively and collaboratively move to a re-engineered workflow. The new workflow is likely to start with a “human in the loop” process to help the enterprise understand how and why the AI system works better than the status quo.

Use good UI design to help with adoption. There will be skeptics in the enterprise regarding an AI solution. Good product design should identify the limitations and opportunities the enterprise has in making decisions today. The UI should provide visibility into data integrity, scenarios, and reasonability testing by key potential users to improve adoption.

AI start-ups that integrate these four elements into their product design are likely to be more successful and require less risk capital to get started.

Online consumer companies, including Facebook, Amazon, Netflix, and Google (FANG), have embraced Artificial Intelligence (AI) to deliver individualized, satisfying customer experiences. As a result, FANG has gained tremendous market dominance, exhibited above-average revenue growth, and been rewarded with high market capitalization. The adoption of AI by the enterprise will unlock similar results for companies in every industry.

AI in the enterprise enables faster and better decisions, promotes creativity, and increases productivity. Empowering employees to prioritize, optimize, and respond to high-value work streams and processes, like customer leads, sales pipelines, security threats, and patient outcomes (to name a few), at unprecedented scale brings significant competitive advantages to the enterprise. In effect, AI gives employees “super-powers,” enhancing their abilities at a scale that has never been possible.

The learning capabilities inherent in AI, using machine learning, deep learning, and other methods, will allow enterprises to adapt quickly to changing business situations. AI learning goes beyond the older approach of programmatic rules in systems and instead uses statistical methods to automatically detect and articulate complex patterns in data that change over time. This approach leads to applications that can predict future outcomes and thereby make recommendations (a small illustration follows at the end of this post).

The outcome for enterprises that embrace AI will be a massive increase in productivity and wealth creation. Cloud technology and decreasing storage and computing costs make AI methods commercially viable today, thereby ushering in the AI economy for investors and unlocking a $3 trillion[1] opportunity of incremental income and wealth creation.
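To illustrate the difference between programmatic rules and learned patterns, here is a small sketch with scikit-learn. The lead-scoring scenario, feature names, and numbers are all invented for illustration:

```python
# Hand-coded rule vs. a model that learns the pattern from data.
# All data and names here are made up for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Historical leads: [deal size in $k, days since last contact] -> converted?
X = np.array([[120, 2], [15, 30], [90, 5], [10, 45], [200, 1], [25, 20]])
y = np.array([1, 0, 1, 0, 1, 0])

# Older, programmatic approach: a fixed rule someone wrote by hand,
# which must be manually re-tuned whenever the business changes.
def rule_says_prioritize(deal_size_k, days_idle):
    return deal_size_k > 50 and days_idle < 10

# Learning approach: the boundary is estimated from past outcomes
# and can simply be re-fit as new data arrives.
model = LogisticRegression().fit(X, y)

new_lead = [60, 8]
print("rule:", rule_says_prioritize(*new_lead))
print("model P(convert):", model.predict_proba(np.array([new_lead]))[0, 1])
```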
[1] Henke, Nicolaus, Jacques Bughin, Michael Chui, James Manyika, Tamim Saleh, Bill Wiseman, and Guru Sethupathy. “The Age of Analytics: Competing in a Data-Driven World.” McKinsey & Company, Web. 19 Apr. 2017.