At Tensility, we’re constantly improving our pattern recognition. Our ability to spot revolutionary technology innovations, market opportunities, and high-performing founding teams is shaped largely by the previous successes and failures that we’ve had with our portfolio companies. The long time horizon of the venture business makes it difficult to spot the correlations and causations in the data, which is often fuzzy and incomplete. Outcomes are influenced by so many factors that searching for statistically significant relationships is difficult.
Let’s talk about founders. Our experience over 20+ years of venture investing in technology startups has taught us what to look for in successful entrepreneurs:
The energy required of founders to both have all these skills as well as grind through building a company from scratch has historically implied at least one characteristic: youth. Anecdotal evidence appears to support this trend: Mark Zuckerberg founded Facebook when he was 19. Michael Dell started Dell Technologies at the age of 21. Paul Graham once said that “the cutoff in investors’ heads is 32… After 32, they start to be a little skeptical.”
Recent research from Ben Jones, professor of strategy at the Kellogg School of Management, however, offers surprising results:
The researchers were chiefly interested in high-growth new ventures—the kinds that can transform the economy—and understanding whether the Silicon Valley mythology was true. So they limited their dataset to include only technology companies, and further winnowed that down to the fastest-growing 0.1 percent—in other words, the one company out of every 1,000 that saw its sales or number of employees increase the most in its first five years.
45! What a disconnect from the prevailing narrative of young founders being successful. We see this narrative at play everywhere. The average age of the Y Combinator Winter 2018 cohort was 29.9. The average age of TechCrunch award recipients from 2008 to 2016 was 31. And so on.
Several factors may contribute to this phenomenon. Technology startups depend heavily on software developers, whose average age appears to be 28.7. The media also tends to overemphasize young entrepreneurs.
How does this new understanding of who is likely to be a successful founder affect our investing?
We’ve never explicitly targeted younger founders. But, prevailing narratives can have subtle and unintentional effects on our investing decisions. Moving forward, we should all be aware of potential biases and make sure we don’t fall prey to them.
Businesses today face risk from a myriad of factors: from political upheaval that disrupts supply chains, to natural disasters that threaten people and facilities, to local crime that puts employees in danger. Companies can attempt to plan for these risks through enterprise risk management, but collecting actionable intelligence in time to mitigate risk continues to be a difficult task.
Stabilitas, based in Seattle, is the leading provider of Critical Event Intelligence that combs the world’s data to correlate critical events with key enterprise assets. The Stabilitas platform integrates the industry’s widest array of global data sources, including unstructured data like text-based news sources as well as structured data sources like earthquake feeds from the USGS. Stabilitas’ key innovation is using multiple machine-learning techniques to synthesize this data from disparate structured and unstructured sources into discrete critical events. Stabilitas works with customers to develop a full picture of the customer’s assets, however they may be distributed around the globe, and surfaces only that critical intelligence which is relevant to the customer, filtering out the noise and amplifying the intelligence signal. Unique in the industry, Stabilitas also offers API connectivity that allows seamless integration with a customer’s existing security operations system.
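Stabilitas’s actual correlation engine is proprietary, but the core idea of “surface only the events near my assets” can be illustrated with a toy geographic filter. The asset names, event IDs, and the 50 km radius below are all hypothetical; this is a minimal sketch, not the product’s method.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def relevant_events(events, assets, radius_km=50.0):
    """Keep only events within radius_km of at least one enterprise asset."""
    hits = []
    for ev in events:
        for asset in assets:
            if haversine_km(ev["lat"], ev["lon"], asset["lat"], asset["lon"]) <= radius_km:
                hits.append((ev["id"], asset["name"]))
                break  # one matching asset is enough to surface the event
    return hits

# Hypothetical data: one Seattle office, two world events.
assets = [{"name": "Seattle HQ", "lat": 47.61, "lon": -122.33}]
events = [
    {"id": "earthquake-1", "lat": 47.50, "lon": -122.30},  # near Seattle
    {"id": "protest-2", "lat": 40.71, "lon": -74.01},      # New York, far away
]
print(relevant_events(events, assets))  # → [('earthquake-1', 'Seattle HQ')]
```

Everything outside the radius is filtered out as noise; only the nearby event reaches the customer.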
Tensility is pleased to announce an investment in Stabilitas, which will enable Stabilitas to scale their customer base and expand into new markets. We’re excited to work with the founders, Greg Adams and Chris Hurst. Greg and Chris are both experienced military veterans and graduates of Harvard Business School, and they each bring unique and relevant experience in enterprise risk to Stabilitas. As an Army Green Beret, Greg saw firsthand the value of actionable critical intelligence in keeping his troops safe. Chris managed enterprise risk both in the Army and while working at CH2M Hill on government contracts.
Stabilitas currently partners with large enterprises like Amazon and Procter & Gamble to provide timely and actionable critical event intelligence. Stabilitas also has a valuable partnership with G4S, the world’s largest security company. We’re excited to see the Stabilitas team move forward in addressing the risks inherent in doing business in today’s unpredictable world, and are confident that critical event intelligence provided by Stabilitas will be an invaluable resource for all enterprises.
Many enterprises are deploying artificial intelligence (AI) and business intelligence (BI) solutions, and nearly all of them continue to struggle with accessing their large, disparate data stores. For each AI or BI application, the enterprise must create and manage a set of policies and procedures to ensure data privacy and security, because most applications copy, replicate, and propagate the data. This creates a growing data governance challenge as a business becomes more digital and intelligent through the use of analytical applications.
Molecula addresses the performance challenges of accessing large, disparate datasets and simplifies data governance. We’re excited to announce our investment in Molecula’s $6M seed round. Molecula is a data virtualization platform that helps enterprises make their data ready for AI. Molecula offers a Virtual Data Source (VDS) that gives users a containerized view of their data, built upon their open-source Pilosa distributed bitmap index technology. Molecula enables blazingly fast querying compared with traditional methods, and simplifies data security and governance because the data is not replicated, as is common in today’s BI and AI applications. With Molecula, companies can drive faster decision cycles for business users as well as simplify data governance compliance. Molecula’s VDS Management System (VDSMS) facilitates the creation and administration of multiple VDS instances, reducing the complexity of managing data infrastructure to just a few lines of code and allowing users to clone, move, manage access to, and apply plugins to VDS instances.
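To see why a bitmap index makes querying so fast, consider this toy sketch: each (field, value) pair gets a bit vector with one bit per row, and a multi-attribute query reduces to bitwise ANDs. Pilosa’s real index adds sharding, roaring-bitmap compression, and distribution; the field names below are hypothetical.

```python
class BitmapIndex:
    """Toy bitmap index: one integer used as a bit vector per (field, value) pair."""

    def __init__(self):
        self.bitmaps = {}  # (field, value) -> int bit vector

    def set(self, field, value, row_id):
        """Record that row `row_id` has `field` = `value`."""
        key = (field, value)
        self.bitmaps[key] = self.bitmaps.get(key, 0) | (1 << row_id)

    def query(self, *terms):
        """Return rows matching ALL (field, value) terms, via bitwise AND."""
        if not terms:
            return []
        result = ~0  # all bits set; AND with non-negative bitmaps makes it finite
        for term in terms:
            result &= self.bitmaps.get(term, 0)
        rows, i = [], 0
        while result:
            if result & 1:
                rows.append(i)
            result >>= 1
            i += 1
        return rows

# Hypothetical customer rows tagged by region and segment.
idx = BitmapIndex()
idx.set("region", "emea", 0); idx.set("segment", "enterprise", 0)
idx.set("region", "emea", 1)
idx.set("region", "amer", 2); idx.set("segment", "enterprise", 2)
print(idx.query(("region", "emea"), ("segment", "enterprise")))  # → [0]
```

The intersection is a single machine-word operation per 64 rows, which is the kernel of the speedup over row-by-row scans.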
Molecula was founded by H.O. Maycotte in Austin, Texas. The company has a strong leadership team and is already partnering with major firms such as Oracle. We’re looking forward to working with H.O. and the team at Molecula to help make enterprises more ready for artificial intelligence!
In today’s data-driven economy, we increasingly need new methods to deal with massive volumes of data and information. One company transforming how information overload is managed is New York-based Agolo, a leading summarization platform for the enterprise. Agolo’s AI engines can analyze thousands of documents daily and generate human-readable summaries in real time. As TechCrunch noted in its article “Agolo attracts Microsoft and Google funding with AI-powered summarization tools,” Agolo helps automate the process of summarization, and has signed The Associated Press, a pioneering news organization, as a flagship client. Agolo summarizes quickly and accurately, producing AI-powered summaries of broadcast and enterprise quality.
Tensility is elated to announce that we are leading a $3MM investment in Agolo, the leader in enterprise-scale summarization of textual data. We are excited to work with Agolo, and with our co-investors Microsoft and Google, to enable users to consume large volumes of data and information more efficiently and spend more time on higher-value business activities. Agolo fights information overload through AI-powered summarization. Agolo has assembled the largest dataset of human-written summaries in existence to power its neural network training. Using Natural Language Processing (NLP), Agolo’s technology analyzes content, identifies different subjects, and draws connections between them.
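Agolo’s neural summarization pipeline is far more sophisticated than anything we could show here, but the classic extractive baseline it improves upon fits in a few lines: score each sentence by the frequency of its words across the document, then keep the top-scoring sentences in their original order. This sketch is purely illustrative.

```python
import re
from collections import Counter

def summarize(text, n_sentences=1):
    """Toy extractive summarizer: rank sentences by average word frequency."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sentence):
        tokens = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)

    top = sorted(sentences, key=score, reverse=True)[:n_sentences]
    # Emit the chosen sentences in the order they appeared in the source.
    return " ".join(s for s in sentences if s in top)

doc = "The market fell sharply. The market fell again today. Cats are nice."
print(summarize(doc))
```

Frequency scoring favors the sentences about the document’s dominant topic; neural abstractive systems go further by generating new sentences rather than selecting existing ones.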
Media companies use the product to deliver personalized summaries of topics of interest to users. Voice assistant platforms can generate voice content to deploy hyper-localized, listenable summaries for their customers. Similarly, financial advisors deliver news summaries customized to the stocks in clients’ portfolios. Moreover, enterprises can use Agolo to generate search-based summarization across a variety of documents in their data lake.
Agolo was founded by Sage Wohns and Mohamed Al Tantawy. We are confident their team of NLP and software engineers has the business and academic experience to tackle this complex information overload problem. Tensility is impressed by their level of expertise and we are excited to work with Agolo to address information overload.
Tensility is thrilled to announce our investment in Aegis Systems’ seed round of $2MM. We at Tensility are passionate about security and making any public space a safer place. We are concerned about gun threats to innocent people. According to an FBI study of active shooter incidents in the United States between 2000 and 2013, around 60% of shootings end before the police arrive. Unfortunately, law enforcement often receives delayed and inaccurate information. The Aegis technology can detect a gun before it is fired, unlike other technologies that detect the sound of a gunshot after the fact. We believe the Aegis software can save precious minutes and has the potential to reduce casualties by detecting firearms, providing early warnings, and improving law enforcement response.
Aegis Systems builds computer vision software using powerful neural network techniques to turn any security camera into a gun-detecting smart camera, enabling real-time response to gun violence. The AI software scans thousands of video feeds simultaneously and alerts building security upon detection of a gun. The technology also detects adjacent threats, such as intruders, left objects, and vehicles, enabling end-to-end security management without requiring additional hardware or security personnel.
Aegis was founded by Sonny Tai and Ben Ziomek. Sonny was a US Marine Corps officer and a strategy consultant, and has a deep passion for addressing gun violence. Ben led engineering and data science teams at Microsoft that leveraged artificial intelligence, and brings the needed product expertise. Tensility is excited to partner with Aegis Systems to drive the company forward and reduce gun violence.
Tensility is thrilled to announce that one of our Tensility Fund II portfolio companies, Health Data Link, is being acquired by Datavant, a company on a mission to connect the world’s healthcare data. This acquisition will further Health Data Link’s mission of building a reliable data sharing ecosystem to support medical research that ultimately improves patient outcomes.
Privacy laws and fragmented data make sharing customer healthcare data between enterprises very difficult. Health Data Link (HDL) developed a unique solution to this problem, using an AI-based hashing function to anonymize the data while keeping it linkable to other sources. This makes customer data anonymous and gives enterprises the ability to connect data from disparate databases. It significantly cuts down on the time needed to share data, since the anonymized data does not have to go through time-consuming compliance processes. Through this process, researchers can perform segmentation and analyses on these datasets and drive critical breakthroughs.
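HDL’s actual hashing approach is proprietary, but the “anonymous yet linkable” property can be sketched with an ordinary keyed hash: every institution that normalizes identifiers the same way and holds the same key derives the same opaque token for the same patient, so records join on the token without raw identities ever leaving either site. The key, field names, and normalization rules below are all hypothetical.

```python
import hashlib
import hmac

SITE_KEY = b"shared-secret-key"  # hypothetical key shared under a trust agreement

def link_token(first, last, dob):
    """Derive a deterministic, non-reversible linkage token from identifiers.

    The same person yields the same token at every institution holding the key,
    so anonymized records can be joined without exposing raw identifiers.
    """
    normalized = f"{first.strip().lower()}|{last.strip().lower()}|{dob}".encode()
    return hmac.new(SITE_KEY, normalized, hashlib.sha256).hexdigest()

# Two institutions with messy, differently-formatted entries for one patient
# still derive the same token — and the token reveals nothing by itself.
t1 = link_token("Ada", "Lovelace", "1815-12-10")
t2 = link_token(" ada ", "LOVELACE", "1815-12-10")
print(t1 == t2)  # → True
```

Using HMAC rather than a bare hash matters: without the secret key, an attacker could enumerate name/birthdate combinations and reverse the tokens by brute force.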
HDL’s customers validated the usefulness of the product for population health and health economics studies. Over 40 institutions used HDL’s solution, connecting data from over 10 million patients. A leading research network, REACHnet, based in New Orleans, used HDL to link data across health care institutions and payers. PRACnet, a patient-centered health plan research network, collaborated with REACHnet on an antibiotic study, and HDL’s technology “enabled a more accurate understanding of the effects of antibiotic utilization while helping organizations protect patient privacy.”
We were also impressed with the expertise of the cofounders – Satyender, Abel and Jasmin - in the fields of healthcare analytics and research. It has been an absolute pleasure working with the founders and the CEO, Jacob Plummer, for the past year. We are confident in their ability to help Datavant accomplish their mission of connecting the world’s health data to improve patient outcomes and we are excited to watch their progress in solving this important problem. We continue to be big believers in healthcare data and are excited to make future investments in the space.
More information about the acquisition can be found in Datavant’s Press Release here.
As many companies have realized the commercial potential of healthcare data that has remained idle within healthcare system databases, AI technology in the healthcare space has begun to expand. Patient data contains valuable information that could bring drastic improvements to diagnosis, care, and operations.
Integrated healthcare systems have been collecting and storing patient data since they came into existence. Sanford Health, a large non-profit rural integrated health system, has been featured in Harvard Business Review. Their massive trove of data spans a variety of categories, from admission, diagnostic, treatment, and discharge data to online activity between patients and providers. They recognize the potential of their robust database and have begun working with research institutions to find ways to improve the quality and reduce the costs of healthcare. In a win-win situation, health care system databases can be leveraged and utilized effectively, and partnering institutions can use previously inaccessible data to conduct further research that may bring positive impact to healthcare.
Within the cognitive health realm, the Data for Good movement has changed perceptions of how technology can be used. With aging populations in countries around the world, AI technology such as predictive analytics has increasingly shown positive results: slowing the progression of dementia and reducing the negative impact of mild traumatic brain injuries through earlier and better diagnosis than traditional tests. BrainCheck, a cognitive assessment and management tool, is a good example. By taking a holistic view of cognitive function over a whole lifetime, cognitive assessment and care can have value for concussions in teenagers, substance use in adults, and dementia in the elderly.
Up until recently, technology adoption has been slow within the healthcare space. If the mountains of data coming from large health care systems and new technology fail to converge through a mutual collaboration, groundbreaking improvements and discoveries may never come to light. Tensility’s Health Data Link offers a solution to this collaboration problem by establishing private data linkages based on individual data governance and data linkage needs.
With these obstacles in mind, there’s a need for the Data For Good movement to push past any privatization of data and technology in order to encourage powerful institutions and technology to work together to improve healthcare outcomes and reduce costs.
Hsu, Benson S., and Emily Griese. “Making Better Use of Health Care Data.” Harvard Business Review, Harvard Business School Publishing, 10 Apr. 2018.
Marshall, Phil (Conversa Health). “Health Care Bots Are Only as Good as the Data and Doctors They Learn From.” VentureBeat, 22 June 2018.
Data for Good is a recent development in the evolution of data science. Although the term has been around for less than a decade, the ideas of social responsibility and of using creative methods to combat social challenges have been around for much longer. At its core, Data for Good is the idea of using data responsibly to solve societal issues across a variety of industries for the betterment of the world.
Not limited to just one industry, Data for Good is applicable to a wide span of areas including public health, poverty, social justice, and the environment. A recent initiative within this space is the smart city movement. Even though not all cities have access to the necessary data, the overall concept is embodied by the use of data to improve public services. The partnership between New York City and Columbia University’s Data Science Institute to reduce floatable trash highlights the positive impact of using data for social good. Data for Good has also been recognized by other institutions of higher learning: the University of Chicago offers a Data Science for Social Good Summer Fellowship to encourage and train data scientists to use their skills on projects with social impact in areas such as transportation, economic development, and international development.
The ongoing rise in cognitive health issues has sparked a belief that technology can be used to slow this trend. Already visible within the healthcare realm, new developments in data science and AI for personalized medicine, senior care, addiction recovery, and cognitive care for dementia support the idea that cognitive decline can be slowed or even reversed to better our mental health and general wellbeing. Instead of submitting to the notion that technology will lead to everyone’s demise, AI carries the hope that technology could potentially boost human intelligence.
Data for Good is gaining momentum outside the educational and healthcare realms. Numerous Data for Good platforms have emerged in the last five years or so, thanks to the rapid acceleration of big data. A prime example is Bloomberg’s Data for Good Exchange, an annual forum that focuses on the intersection of data science and social good and where this combination could lead. Their theme for the 2017 conference, “With Great Data Comes Great Responsibility,” reflects the general concept. A growing consensus among universities, businesses, and political leaders actively supporting this idea broadens the possibilities beyond data scientists and AI innovators.
Fuchs, Ester R. “Smart Cities, Stupid Cities, and How Data Can Be Used to Solve Urban Policy Problems.” Tech At Bloomberg, Bloomberg Finance L.P., 28 Aug. 2017.
Gazzaley, Adam. “The Cognition Crisis.” Medium (Future Human), 9 July 2018.
In recent years, Machine Learning (ML) has grown beyond a simple buzzword. From Google Maps to fraud detection to Netflix, there are countless ways in which ML can be translated into solutions that positively influence and permeate our daily lives.
Here at Tensility, we often get questions about ML and whether it’s different from AI. ML, an important branch of AI, can be quite complicated to understand without any background. Machine Learning: A Primer, a recent Medium article by Lizzie Turner covering the what, who, when, where, how, and why of ML, condenses the answers in a simple way, offering valuable insight, tangible examples, and helpful graphics that anyone can understand.
ML is composed of algorithms that sift through data in order to reach a conclusion or make a prediction. The programmer’s role in ML is not to program the machine to complete a task, but rather to teach it how to develop algorithms itself, learning from the data and even from its own experience. ML approaches fall into three broad categories: supervised learning, unsupervised learning, and reinforcement learning. As detailed in Turner’s article, supervised learning deals with labeled data, such as sorting spam email, while unsupervised learning deals with unlabeled data and is often used for big-data visualization. Reinforcement learning trains a machine to adapt toward ideal behavior in order to maximize performance, as in Google DeepMind’s AlphaGo.
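Supervised learning in miniature can be shown with a k-nearest-neighbors classifier: the labeled training examples act as the “teacher,” and a new point is classified by majority vote among its closest neighbors. The spam features below (exclamation marks and links per message) are invented for illustration.

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among the k nearest labeled examples."""
    def sq_dist(a, b):
        # Squared Euclidean distance; squaring preserves the ranking of neighbors.
        return sum((x - y) ** 2 for x, y in zip(a, b))

    nearest = sorted(train, key=lambda ex: sq_dist(ex[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Hypothetical labeled data: features = (exclamation marks, links per message).
train = [
    ((5, 3), "spam"), ((4, 4), "spam"),
    ((0, 0), "ham"), ((1, 0), "ham"), ((0, 1), "ham"),
]
print(knn_predict(train, (4, 3)))  # → spam
```

No explicit rule for “spam” is ever programmed; the prediction emerges entirely from the labeled examples, which is the essence of supervised learning.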
How ML works is more complicated. Drawing on mathematics (linear algebra, calculus, and statistics), algorithm types include regression, instance-based, decision tree, Bayesian, clustering, deep learning, neural network, and others. Of these, regression algorithms are among the most favored due to their speed. Others, such as ensembles of decision trees, combine many weaker learners into a single stronger model that can make more accurate predictions.
We too believe that ML has the power to positively influence people’s lives and the way they work. In areas such as cognition and brain health, supply-and-demand forecasting, and longevity, as well as broader spaces like healthcare and network security, there is great potential for unique breakthroughs that could change the way many of these industries operate today.
Start with proprietary data. For a start-up considering the long journey of building an enterprise software company, it is imperative to have a relevant and hard-to-replicate data set. There are several ways start-ups tackle this problem. The first is to convince a charter customer to provide their operational data in exchange for an economic incentive for being the charter customer (Tensility’s 2DA Analytics did this to get started). The second is to devise a strategy to gather and create your own data set. This is particularly useful if the process or behavior you are trying to model is not stored in any enterprise operational system (Tensility’s Triggr Health is a good example of this path). A third way is to leverage sponsored research projects in exchange for an economic incentive (Tensility’s Health Data Link proved out their product in this fashion). Other, less valuable ways to gain access to data, such as procuring third-party data services or gathering publicly available sources, lead to unattractive start-ups because the primary data set can be easily replicated.
Develop models to frame and validate the size of the problem. Leverage open-source AI platforms to show your models are valid and provide the proper amount of predictive and/or prescriptive results. Problems requiring ML and deep learning platforms can utilize Google’s TensorFlow, Caffe from UC Berkeley, Microsoft’s CNTK, Tencent’s Angel, Baidu’s PaddlePaddle, or MXNet by AWS/Baidu/CMU. For Natural Language Processing problems, open-source projects like NLTK, TextBlob, Gensim, or spaCy are good places to start. The goal of this effort is to develop a set of tested results that can be shown to knowledgeable prospects. This feedback will ensure you are headed in the right direction of solving a high-value problem (Tensility’s Genivity did an outstanding job at this).
Design the workflow. A well-designed AI solution will have the ability to modify the current workflow in the enterprise and thereby realize the full potential of providing new capabilities to enterprise workers. Start-ups should map out the current workflow and roles of current users, then creatively and collaboratively move to a re-engineered workflow. The new workflow is likely to start with a “human in the loop” process to help the enterprise understand how and why the AI system works better than status quo.
Use good UI design to help with adoption. There will be skeptics in the enterprise about adopting an AI solution. Good product design should identify the limitations and opportunities in how the enterprise makes decisions today. The UI should give key potential users visibility into data integrity, scenarios, and reasonability testing to improve adoption.
AI start-ups that integrate these four elements into their product design are likely to be more successful and require less risk capital to get started.