INFORMATICA CORP (INFA)
September 12, 2013 - 5:45pm EST by tumnus960

                                           2013     2014
Price: $38.35                  EPS        $1.38    $1.60
Shares Out. (in M): 111        P/E        24.2x    20.8x
Market Cap (in $M): 4,268      P/FCF      21.2x    18.6x
Net Debt (in $M): -604         EBIT         220      268
TEV (in $M): 3,664             TEV/EBIT   16.6x    13.7x


  • Software
  • margin expansion

Description

Introduction

INFA is a uniquely positioned software company that has grown organically for ten consecutive years and generates a superb Return on Net Total Capital.  (RONTC was 28.8% in 2012 even though 2012 was a difficult year for the business.)  While the stock’s valuation may not look compelling at first glance, it is towards the bottom of its historic multiple range, and I will show why its return should be at least respectable from today’s price. 

INFA is my first foray into infrastructure software.  I thus found the learning curve for this story to be particularly steep because it required learning an entirely new technical vocabulary.  I ended up writing a six page glossary as part of this effort, and I have included that glossary at the end of my report to expedite readers’ progress up this curve. 

 

Background

Computer programs such as ERP systems and CRM systems are usually built around the front-line workers who will be their primary users.  The data contained within these systems, however, is often needed by other parts of the organization, whether they be adjacent departments that use different, but related, computer systems or management teams attempting to solve problems, understand trends, and plan for the future.  Businesses’ data is thus scattered among many computer systems, a phenomenon that is referred to as “Data Fragmentation.”  Furthermore, the overriding trend is for data fragmentation to become worse over time even though some forces occasionally rein it in. 

The primary cause of data fragmentation is users’ understandable desire to adopt the software tools that best suit the needs of their immediate departments.  This favors best-of-breed solutions over integrated solutions.  (Integrated solutions are much easier for IT departments to support because the various modules are designed to work seamlessly with each other.  These types of solutions are thus ideal for organizations with limited IT resources such as small businesses, municipal governments, and non-profits.)

Another cause of data fragmentation is the general reluctance to simply replace legacy systems with newer systems.  This results in multiple generations of IT hardware and software running side by side, and sometimes in redundant systems running in parallel.  Having personally led a “forklift upgrade” of a core IT system, I can readily attest to the pain, operational problems, short-term costs, and risks associated with migrating to a completely new computer system.  To be sure, companies do eventually migrate their systems, but the pace of this is usually glacial, and this tends to result in environments that have been cobbled together over time instead of installed in one unified replacement cycle.

A more recent cause of data fragmentation has been the adoption of cloud computing.  This has resulted in hybrid IT environments where data is scattered not only among multiple systems behind a company’s firewall but also among multiple systems outside of it.  An even newer source of data fragmentation comes from new data sources such as social networking and device data (i.e. the surprising amount of data being collected by smartphones and similar devices).  These are relatively novel types of data generated by third parties, and they are usually unstructured. 

While data is created and stored in disparate systems, it often needs to be shared among those systems, and this is accomplished through digital pipes called “Integrations.”  Unlike physical pipes, however, the data often undergoes various transformations as it passes through the pipe, either to protect the data (i.e. encryption) or to change its form so that it will be usable in the target system.  These integrations are a form of software called Middleware, and like all other software, they have to be developed, tested, maintained, and sometimes audited.  For example, if either the source system or the target system is updated, the middleware between them might also have to be updated.  I spoke to a systems integrator who explained that middleware often has a “step-child-like” status.  This is because developers focus primarily on the applications, whose impact on the workers who use them is easier to appreciate.  Middleware, by contrast, resides in between computer systems and doesn’t impact end users directly.  Importantly, most integrations are hand coded, which means that they have to be developed, tested, and maintained manually, and this naturally becomes harder to do over time as data becomes more fragmented and the types of data and data processing systems become more diverse.

 

Informatica’s Solution

INFA’s flagship product is called PowerCenter, and it provides an automated tool that developers can use to create integrations within a visual development environment.  Instead of coding integrations by hand, developers can use PowerCenter to drag and drop icons that represent the data sources and data targets and then define whatever calculations are required to transform the data as it passes through the pipe.  INFA thus offers an automated means through which to create the pipes instead of building the pipes by hand.
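
To make the contrast with hand-coding concrete, here is a minimal sketch in Python.  The field names, systems, and the run_mapping helper are invented for illustration; they are not INFA’s actual interfaces.  The point is that the tool-based approach reduces an integration to a declarative mapping that a generic engine executes, which is far easier to update and audit than bespoke code.

```python
# A hand-coded integration: extraction, transformation, and loading logic
# are welded together, so a change to either system means editing this code.
def sync_orders_by_hand(source_rows):
    loaded = []
    for row in source_rows:
        loaded.append({
            "ORDER_ID":   int(row["ord_no"]),                # rename + retype
            "CUSTOMER":   row["cust_name"].strip().upper(),  # cleanse
            "AMOUNT_USD": round(float(row["amt"]), 2),
        })
    return loaded

# The tool-based equivalent: the developer only declares source-to-target
# field mappings and transforms; a generic engine executes them.  If a
# schema changes, only the declaration needs to be updated.
ORDER_MAPPING = [
    ("ord_no",    "ORDER_ID",   int),
    ("cust_name", "CUSTOMER",   lambda s: s.strip().upper()),
    ("amt",       "AMOUNT_USD", lambda s: round(float(s), 2)),
]

def run_mapping(mapping, source_rows):
    return [{tgt: fn(row[src]) for src, tgt, fn in mapping} for row in source_rows]

rows = [{"ord_no": "7", "cust_name": " acme ", "amt": "19.95"}]
assert sync_orders_by_hand(rows) == run_mapping(ORDER_MAPPING, rows)
```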

A key part of INFA’s value proposition is that INFA updates the integrations that users have created through PowerCenter whenever the source or target systems are updated.  INFA’s integrations are thus much easier to maintain, and they are also much easier to audit if needed.  INFA has been supporting some of the source systems for as long as twenty years and is thus very familiar with their various generations.  While it is technologically straightforward to code a single integration from Point A to Point B when A and B are already defined, it is incredibly complex to develop a platform that can integrate any given Point A to any given Point B regardless of how they are defined.  INFA has many small competitors who provide specific integrations, but INFA’s platform is distinguished by the fact that it can create integrations between over 500,000 systems.  This is particularly appealing to users who want to future-proof their IT environments since INFA allows them to quickly modify their pipes whenever a given program is replaced or modified. 

INFA’s primary competition is hand-coding, so they don’t face commercial competitors in the majority of their deals.  Hand-coding has remained the predominant approach for a variety of reasons.  The first is that developers are fluent in writing code, so hand coding is often the most expedient solution even if it will be more costly over the long-term.  The second is that the decision about how to create the integrations is usually made by a team leader who likely prefers to use the developer resources he already has instead of requesting funds to purchase an off-the-shelf integration solution.  A third dynamic is that developers see integrations as a source of job security since they may be the only ones capable of maintaining the integrations over time.  Lastly, while I was not able to confirm this with INFA, I suspect that third-party systems integrators have a huge financial incentive to hand code integrations.  The enterprise software market is roughly $40BB annually, but only $8BB of that is spent on the software itself; the remaining $32BB is spent on services to install and maintain that software.  Systems integration contracts are usually structured as Time & Materials contracts, and this encourages those vendors to create solutions by hand.   

A variety of factors, however, are making hand-coding a less tenable approach over time.  The first is the inexorable trend of data fragmentation, which requires developers to acquire more and more skills as types of systems and data proliferate.  This encourages them to adopt INFA’s platform.  Another factor was the Great Recession, which led CIO’s to examine their IT organizations on a more granular level than they had in the past in an attempt to improve efficiency.  As CIO’s looked for automation opportunities, Informatica’s products often rose to the surface, and this created a sea change in INFA’s ability to present to high-level executives.  Prior to 2007, INFA’s CEO, Sohaib Abbasi, was often unable to secure meetings with CIO’s, but he is now readily able to get such meetings.  Another, though less discussed, dynamic encouraging the adoption of INFA’s products is the fact that corporate IT customers are increasingly asking their systems integrators to bear some of their projects’ risks, primarily through fixed-price contracts.  (Interestingly, this trend seems to be a secondary consequence of cloud computing, which is showing organizations the benefits of pre-packaged, fixed-price solutions.)  INFA is thus receiving more interest from systems integrators since INFA’s platform offers a straightforward and predictable means by which to integrate various systems.

One of the most attractive features of INFA’s franchise is that they face commercial competitors in less than half of their deals, and when they do face Tier 1 competitors, Informatica wins over 75% of the time.  INFA competes with IBM in 20-25% of their deals, but IBM’s solution (“Ascential”) is typically sold as part of a package sale with other IBM products.  INFA competes with ORCL, SAP, and MSFT relatively infrequently.  Interestingly, ORCL offers its own Data Integration product, but ORCL is also INFA’s #1 OEM customer because ORCL’s own customers want a Data Integration solution that can work with whatever systems they currently use or may choose in the future.  This gets at another critical facet of the INFA franchise: INFA’s leadership position is structural.  Customers prefer using an independent Data Integration solution because they want complete freedom in deciding how to build their computing environments.  For example, they don’t want to be biased toward ORCL’s other products because they are already using ORCL’s Data Integration product.  INFA’s leadership position is further reinforced by the application software companies themselves.  These firms are competing with each other, which limits their willingness to share the technical information required to pre-build integrations between their products. 

Over the last several years, INFA has extended their product line into adjacent areas through acquisitions and R&D.  For example, data is only useful if it is trustworthy, so INFA offers a Data Quality product to ensure that data is reliable.  The company has also moved into Master Data Management (MDM) and Information Lifecycle Management (ILM).  These complementary product lines are both discussed in the glossary.  These adjacent product areas are relatively new for INFA, and only 43% of INFA’s active projects during 2Q13 included even one of INFA’s new products.  Management thus believes that cross-selling these products into new and existing customers is INFA’s largest growth opportunity, though there is also ample opportunity to grow revenue from their core products. 

 

Recent Disappointments

As noted previously and shown in the financial tables at the end of this report, INFA realized phenomenal revenue growth from 2003 through 2011, even growing organically throughout the Great Recession.  While INFA’s results during 2012 were strong on a relative basis, they were very disappointing compared to the rapid revenue growth and margin expansion of previous years.  These problems arose from a few discrete sources that the company has made significant progress resolving, but questions on their most recent earnings call, along with the stock’s multiple, suggest that the company still faces some skepticism about their ability to resume double-digit revenue growth and healthy margin expansion. 

Prior to 2012, INFA had two overlaid salesforces.  INFA’s account managers sold the company’s core products, and an independent, specialized team sold INFA’s newer products such as MDM and ILM.  This approach led to competition between the two sales teams, so moving into 2012, management decided to merge them.  Merging the teams was a step in the right direction, but the new structure still wasn’t quite what the company needed, and the shift was also poorly executed.  The first mistake was the classic problem that companies face when their salespeople know they will soon be reassigned to a new territory: they stop diligently cultivating their sales pipeline because they know that they won’t be the ones to benefit from those efforts.  So while INFA’s sales pipelines looked fine superficially moving into 2012, their quality had actually deteriorated significantly.  The consequences of this were amplified by the fact that the territory changes were extensive.  A second miscalculation came from the fact that both of the original salesforces overestimated their familiarity with each other’s products.  They were thus poorly positioned when they began attempting to sell new products into new accounts.  These problems surfaced in 2Q12 when license revenue abruptly declined 17.8% after having grown consistently for the prior eleven quarters. 

In some ways, these changes reflected basic growing pains as INFA transitioned into a much larger, more complex company whose sales effort was beginning to require a deep understanding of its customers’ needs and IT environments—especially when selling their newer products.  Paul Hoffman joined INFA as their Executive of Worldwide Sales in 2005, and during the 2012 Analyst Day, he walked through how INFA’s sales organization had changed radically over the prior seven years.  When Mr. Hoffman joined, INFA generated all of its roughly $200MM in revenues by selling one product (ETL) into one vertical market (Data Warehousing).  They did this solely through territory managers, which made sense given their limited product line and limited sales resources.  By 2011, however, INFA’s revenue had tripled, and the company had gained several complex but complementary new products which offered them the opportunity to cross-sell extensively into their customer base.  Pursuing this opportunity, however, required a much more sophisticated and coordinated sales effort.  While INFA did not mention this publicly, by 2012, the sales organization had grown beyond what Mr. Hoffman was interested in managing, and the company had already begun to search for his successor.  The problems that surfaced in 2Q12 accelerated this search, and the company installed John McGee in this role in July 2012.  Mr. McGee implemented a systematic “sales cadence” in which the salespeople have deliverables every week, including updates about opportunities for future quarters.  This is resulting in better visibility, a better-qualified pipeline, and earlier notification of when INFA needs to change course to better pursue an upcoming opportunity. 

Another problem that surfaced during 2Q12 was trouble within INFA’s European organization.  This was primarily due to turnover in the region’s leadership, especially their regional sales manager.  While the current European Sales Manager is the fourth one in recent memory, management’s commentary over the last two quarters suggests that this region finally has a permanent management team and is successfully implementing the changes that have proven effective within the American sales organization. 

INFA thus appears to be well on its way to building a sales organization that can realize its larger market opportunity.  Ongoing efforts in this area, however, as well as incremental investments in R&D, are depressing operating margins, and management indicated that margins will not return to their prior peak until sometime after 2014.  I believe this testifies to the size and longevity of INFA’s market opportunity, though it is admittedly weighing on profitability in the interim.

 

Market Opportunity

Numerous high-level and discrete trends should allow INFA to return to double digit revenue growth.  The first of these is the growing complexity of IT infrastructures which is making hand coding progressively less tenable and encouraging the adoption of INFA’s automated tool.  As noted previously, the Great Recession and the emerging trend toward fixed-price systems integration contracts should augment this general shift.

One of the attractive facets of the INFA story is that their product is flexible and can thus address a wide array of customer needs (a.k.a. “use cases”).  Below are some examples of use cases that were given on the 2Q13 CC:

  • One European customer plans to use PowerCenter to help them load data from SAP into Teradata to ultimately help them improve their decision making.  This same customer is using INFA to synchronize data between SAP and Salesforce.com to improve their operational efficiency.
  • Financial Service firms are using INFA to meet compliance requirements and also to become more customer-centric organizations. 
  • In the US, the Affordable Care Act is placing emphasis on cost-effective quality care which is creating new opportunities for INFA, including helping some states develop their insurance exchanges.
  • INFA is beginning to see some successful deployments of PowerCenter, Big Data Edition.  They also noted that a large financial institution benchmarked this product against other alternatives including hand coding and determined that PowerCenter, Big Data Edition was twice as fast as any other solution. 
  • One of the largest title lending companies in the US standardized on almost the entire INFA portfolio in order to increase profitability through trustworthy, holistic and authoritative data on customers and market segments.
  • One of the largest healthcare resource groups in the US standardized on Informatica Cloud to integrate Salesforce.com with data in other cloud services such as Workday and also data in on-premise applications such as PeopleSoft and custom SQL server-based applications.  In this example, Informatica Cloud will replace four other products.  That same customer previously adopted Informatica MDM to gain a single view of patients and doctors across five different Salesforce.com organizations which corresponded to five different service lines. 
  • A California community-based healthcare provider selected Informatica ILM to retire over 150 different applications by using ILM to archive the data and ensure the privacy of sensitive information through data masking. 
  • MD Anderson, the nation’s leading cancer hospital, adopted MDM to comply with a regulatory mandate that required them to convert to new healthcare codes.  MD Anderson will also be using MDM to manage master data from past clinical trials across multiple areas of cancer research.

INFA should also benefit from a trend towards performing more business analytics.  While Business Intelligence (BI) has been around for a long time, the original BI systems were run by a handful of extremely smart people who understood how to use them and would run reports to be disseminated throughout the organization.  The newer BI systems, however, have evolved so that they are easier to use and produce more useful information, both of which are increasing adoption.  (I suspect that lower cost computing power and data storage are further improving BI’s value proposition.)  Demand for BI systems is also increasing as more constituents become familiar with what data is available and how to use it.  Lastly, businesses are becoming more familiar with how to customize the presentation of data and distribute it to various constituents on an ongoing basis through tools such as “Dashboards.” 

Big Data represents an advanced form of BI that early adopters are currently experimenting with.  While Big Data is often referred to as a distinct market opportunity, I consider it to be an extension of this broader trend of conducting more analytics.  This benefits INFA because they provide the pipes through which to quickly connect all of the source systems to the analytic systems.  INFA is well positioned to serve Big Data initiatives for two reasons.  The first is that there are six different types of new analytic platforms that can be used to conduct Big Data analyses, and users are just beginning to determine which platforms are best suited to address various needs.  PowerCenter, Big Data Edition is thus particularly attractive for creating the related integrations because it allows users to map the data once and then quickly redeploy the integration to a new analytic platform if needed (See “Data Mapping” in the glossary).  A second advantage PowerCenter offers with Big Data projects applies specifically to one of the six new types of analytic platforms, Hadoop (See “Apache Hadoop” in the glossary).  Hadoop allows users to harness thousands of computer processors to analyze a vast amount of data, but creating integrations for it requires writing MapReduce programs.  MapReduce is a relatively new programming model, so there isn’t a wide pool of developers who are already fluent with it.  PowerCenter, Big Data Edition allows users to sidestep the need to learn this model by simply creating the integration within PowerCenter which, by contrast, already has a large population of experienced users. 

A longer-term growth opportunity could come from a recently introduced iteration of INFA’s core technology called “Vibe.”  When INFA started about 20 years ago, their goal was to abstract away the details of the underlying processing environment from what developers were trying to do with the data.  The goal of abstracting details like the OS, HW platform and Database environment was to make it easier to create an integration and ensure that this integration would be future-proof so that if the processing environment changed, the integration could easily adapt.  This concept of abstracting away the details so that developers could easily access and use data has grown more compelling as computing environments have grown radically more complex over the last two decades. 

Vibe is a new embodiment of INFA’s core technology that allows it to operate as a standalone “Virtual Data Machine” (VDM).  INFA is attempting to do for data what Java did for applications.  The Java Virtual Machine was a virtual computer that could run any program coded in Java on a multitude of host platforms.  This allowed developers to “Write Once, Run Anywhere.”  They could write their programs in Java and then deploy those programs on a wide variety of host computers because the Java Virtual Machine was bridging the gap between the Java code and the host environment.  Programmers thus didn’t need to understand the host environment or modify their code to run in different host environments.  Vibe unifies INFA’s core technologies and allows them to be deployed on any platform in a virtualized manner.  Developers will thus be able to “Map Once, Deploy Anywhere.” 

Vibe is designed to simplify the data infrastructure in two ways:

  • Vibe provides near universal access to data regardless of the data’s physical location (i.e. on premise or in the cloud) and what type of system the data resides in (i.e. mainframe data files, SQL relational DB’s, non-SQL DB’s, or Hadoop Distributed File System (HDFS)). 
  • Vibe provides a Lingua Franca specification for the DI processing, again regardless of the data’s physical location or the computing system that it is running on. 

So Vibe can access and process any kind of data in any kind of system, hence the description, “Virtual Data Machine.”  Vibe aims to help developers by reducing the number of skills they need to master and to help IT staff by simplifying the infrastructure that they have to manage. 

In the medium-term, INFA will promote a “Vibe Inside” Software Development Kit (SDK) that programmers can embed within their applications.  This SDK represents an attempt by INFA and industry partners to promote a modern, standards-based data infrastructure for the next generation of data centric applications.  Over the long-term, INFA plans to adapt Vibe for the industrial internet which is quite relevant since the industrial internet is predicated on being able to harness a wide variety of machine interaction data in real-time. 

For more information about Vibe, please refer to the following two articles:

http://www.businessweek.com/articles/2013-06-05/informatica-wants-to-be-a-big-data-overlord

http://www.information-management.com/blogs/the-right-vibe-for-information-management-10024513-1.html

It is obviously too early to determine whether INFA will successfully position Vibe as a ubiquitous software component that developers use to unify a multitude of computing systems and devices, but I believe that INFA is the best positioned company to attempt this.  During their 2013 analyst day, management noted that re-packaging INFA’s technology as an embeddable “Virtual Data Machine” is leading customers to quickly recognize additional ways that they can use INFA’s technology beyond the use cases that they considered when this technology was only offered as packaged software.  So while Vibe is still in its early stages, it appears to meet a real need. 

 

Prospects for INFA Stock

INFA has unfortunately inched up since I began writing my report, but the price is still at a good entry point.  At $38.35, using 2014 estimates, INFA is trading at:

  • Net P / Adj. E = 20.8x
  • Net P / FCF = 18.6x
  • EV / EBITDA = 13.5x

As shown in the tables below, this is around the middle of its historic P / Adj. E range, the middle-to-lower part of its historic P / FCF range, and the lower portion of its historic EV / EBITDA range.  When viewing these figures, it is important to remember that R&D and S&M investments will continue to weigh on margins in 2014, so 2014’s earnings are below INFA’s potential. 

                   2002     2003   2004   2005   2006   2007   2008   2009   2010   2011   2012   2013E

Net P/Adj. E
  High         (1,749.4)    69.6   72.5   25.7   27.4   24.2   23.4   27.1   37.5   40.8   38.2
  Average        (680.4)    38.4   36.2   17.7   21.4   19.1   18.5   17.2   24.5   30.7   25.4
  Low              24.1     23.9   18.7   10.7   16.7   15.1   11.1    9.9   16.5   21.8   14.5
  Current                                                                                          24.2

Net P/FCF
  High           (578.9)    51.1   68.8   57.0   29.8   27.3   25.9   28.1   39.9   42.2   40.6
  Average        (225.1)    28.2   34.4   39.3   23.2   21.5   20.4   17.8   26.1   31.7   27.0
  Low               8.0     17.5   17.7   23.7   18.1   17.1   12.2   10.3   17.5   22.5   15.4
  Current                                                                                          21.2

Net EV/EBITDA
  High            687.3     48.3   37.2   43.5   32.0   27.6   17.2   21.1   28.8   30.5   28.9
  Average         267.3     26.6   18.6   30.0   24.9   21.8   13.6   13.4   18.8   22.9   19.2
  Low              (9.5)    16.6    9.6   18.1   19.5   17.3    8.1    7.7   12.7   16.3   11.0
  Current                                                                                          16.3

 

My long-term forecast is for INFA to grow revenues at 11-13% annually, though figures provided during their 2013 analyst day, as well as their historical growth rate, suggest that 11% is likely to be more of a “base case” scenario.  My forecast assumes that margins resume their climb in 2014, but at a more gradual pace than in prior years.  I’m forecasting INFA to regain their prior-peak operating margin in 2015 or 2016.  I am also forecasting 3% annual growth in INFA’s diluted share count because they have issued an unusually high number of stock options.  The company, however, does generate abundant FCF, so repurchases could make this dilution less than I’ve forecasted.  Altogether, these assumptions yield 2017 Adj. EPS estimates of $2.21 to $2.55.  Applying exit multiples of 20.0x and 22.0x, respectively, and adding in cash per share, which should nearly double from today’s level, suggests exit prices of $53.86 to $66.24.  Annualized over four years, this implies annual returns of 8.9% to 14.6%.  (You could arguably annualize this over 3.5 years, which would imply annual returns of 10.2%-17.0%.)  I believe there is upside to both my conservative and aggressive scenarios.
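
The arithmetic behind these figures can be reproduced directly, as in the sketch below.  The implied 2017 net cash per share values are backed out of the stated exit prices (roughly double today’s $5.43); they are my assumption, not a company disclosure, and the 3.5-year aggressive figure differs from the 17.0% above only by rounding.

```python
# Reproduce the exit-price and annualized-return arithmetic from the text.
# The implied net cash per share is backed out of the stated exit prices;
# it is an assumption of this sketch, not a company disclosure.
price_today = 38.35

scenarios = {
    # name: (2017 Adj. EPS, exit multiple, implied 2017 net cash per share)
    "conservative": (2.21, 20.0, 9.66),
    "aggressive":   (2.55, 22.0, 10.14),
}

for name, (eps, multiple, cash) in scenarios.items():
    exit_price = eps * multiple + cash
    ret_4y  = (exit_price / price_today) ** (1 / 4.0) - 1
    ret_35y = (exit_price / price_today) ** (1 / 3.5) - 1
    print(f"{name}: exit ${exit_price:.2f}, "
          f"{ret_4y:.1%}/yr over 4 yrs, {ret_35y:.1%}/yr over 3.5 yrs")

# conservative: exit $53.86, 8.9%/yr over 4 yrs, 10.2%/yr over 3.5 yrs
# aggressive: exit $66.24, 14.6%/yr over 4 yrs, 16.9%/yr over 3.5 yrs
```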

Historic financial results are shown below:

Informatica                        
Fiscal Year End: December 31                        
In   Millions, Except for Percentages, per Share Amounts and Supplemental Metrics            
                         
    2002 2003 2004 2005 2006 2007 2008 2009 2010 2011 2012
                         
License Revenue   99.9 94.6 97.9 120.2 146.1 175.3 195.8 214.3 295.1 353.7 321.0
Subscription Revenue                        
Maintenance Revenue   53.9 75.7 87.5 103.6 125.0 151.2 186.2 215.3 255.4 314.0 360.8
Consulting, Education & Other Rev.   41.6 35.2 34.3 43.7 53.6 64.7 73.7 71.1 99.5 116.1 129.8
Total Revenue   195.4 205.5 219.7 267.4 324.6 391.3 455.7 500.7 650.1 783.8 811.6
                         
% Change (Yr./Yr.)                        
License Revenue     -5.4% 3.5% 22.7% 21.6% 20.0% 11.7% 9.5% 37.7% 19.8% -9.2%
Subscription Revenue                        
Maintenance Revenue     40.6% 15.5% 18.4% 20.6% 21.0% 23.1% 15.6% 18.6% 23.0% 14.9%
Consulting, Education &   Other Rev.     -15.5% -2.6% 27.4% 22.6% 20.8% 14.0% -3.6% 40.1% 16.6% 11.8%
% Change (Yr./Yr.)     5.2% 6.9% 21.7% 21.4% 20.5% 16.5% 9.9% 29.8% 20.6% 3.5%
                         
Appx. Organic Rev. Growth   (Yr./Yr.)     4.1% 3.9% 21.7% 19.4% 18.8% 12.7% 4.3% 18.1% 18.3% 1.3%
                         
                         
Cost of License & Sub.   Revenue   6.2 3.1 3.8 4.5 7.0 3.7 3.3 3.1 4.5 5.0 4.5
Cost of Maint. & C,E&O   Revenue   39.2 38.8 40.3 46.8 56.9 67.5 78.3 74.4 97.9 115.4 121.8
Total Cost of Revenue   45.4 42.0 44.1 51.2 63.9 71.2 81.6 77.5 102.4 120.4 126.3
                         
License & Sub. Gross   Profit   93.8 91.5 94.2 115.7 139.1 171.6 192.5 211.2 290.6 348.7 316.5
Maint. & C,E&O Gross   Profit   56.3 72.1 81.4 100.5 121.6 148.4 181.7 212.0 257.1 314.7 368.8
Total Gross Profit   150.0 163.6 175.6 216.2 260.7 320.1 374.1 423.2 547.7 663.4 685.3
                         
License & Sub. Gross   Margin   93.8% 96.7% 96.1% 96.3% 95.2% 97.9% 98.3% 98.5% 98.5% 98.6% 98.6%
Maint. & C,E&O Gross   Margin   58.9% 65.0% 66.9% 68.2% 68.1% 68.7% 69.9% 74.0% 72.4% 73.2% 75.2%
Total Gross Margin   76.8% 79.6% 79.9% 80.8% 80.3% 81.8% 82.1% 84.5% 84.2% 84.6% 84.4%
                         
R&D   45.6 47.3 49.1 42.1 51.9 66.2 68.4 73.5 98.6 121.7 128.7
% of Sales   23.3% 23.0% 22.4% 15.7% 16.0% 16.9% 15.0% 14.7% 15.2% 15.5% 15.9%
                         
S&M   86.8 86.6 94.1 118.6 134.0 152.5 171.9 186.8 238.2 267.9 292.2
% of Sales   44.4% 42.1% 42.8% 44.4% 41.3% 39.0% 37.7% 37.3% 36.6% 34.2% 36.0%
                         
G&A   20.3 20.8 20.8 20.6 23.5 30.8 32.6 37.7 40.2 48.7 52.1
% of Sales   10.4% 10.1% 9.5% 7.7% 7.2% 7.9% 7.2% 7.5% 6.2% 6.2% 6.4%
                         
Operating Income   (2.7) 8.9 11.6 35.0 51.3 70.6 101.2 125.2 170.6 225.1 212.3
% of Sales   -1.4% 4.3% 5.3% 13.1% 15.8% 18.1% 22.2% 25.0% 26.2% 28.7% 26.2%
                         
Share Based Compensation   0.2 0.8 3.0 0.7 14.1 16.0 16.3 17.9 23.4 33.3 42.8
Amort. of Acquired Tech.   1.0 1.0 2.3 0.9 2.1 2.8 4.1 8.0 13.3 19.5 22.0
Amort. of Intang. Assets   0.1 0.1 0.2 0.2 0.7 1.4 4.6 10.1 9.5 7.7 6.6
Facilities Restruct. Charges   (Benefits)   17.0   112.6 3.7 3.2 3.0 3.0 1.7 1.1 (1.1) 2.2
Other Charges (Benefits)     4.5     1.3   (11.1) (1.7) (0.5) 0.3 2.8
                         
Interest Income   5.1 3.9 3.5 7.3 18.2 21.8 14.1 5.9 3.9 4.8 4.0
Interest Expense   0.1 0.0 0.1   5.8 7.2 7.2 6.6 6.6 2.1 0.5
Other Income (Expense)   1.3 3.3 (0.1) (0.7) (0.6) 0.6 0.9 1.2 0.2 (1.5) (1.7)
                         
EBT   (14.7) 9.4 (103.2) 36.0 41.7 62.6 92.0 89.8 121.1 166.6 137.8
% of Sales   -7.5% 4.6% -47.0% 13.5% 12.8% 16.0% 20.2% 17.9% 18.6% 21.3% 17.0%
                         
Income Tax Expense   0.9 2.1 1.2 2.2 5.5 8.0 36.0 25.6 34.8 49.1 44.6
Rate   -6.3% 22.5% -1.2% 6.0% 13.1% 12.8% 39.1% 28.5% 28.7% 29.5% 32.4%
                         
Net Income   (15.6) 7.3 (104.4) 33.8 36.2 54.6 56.0 64.2 86.3 117.5 93.2
                         
Tax Impact of Non-GAAP   Adjustments             4.3 (1.9) 10.4 14.1 17.3 22.4
Rate             18.5% -11.1% 29.1% 30.0% 29.0% 29.3%
                         
Adj. Net Income   2.8 13.8 13.7 39.3 57.7 73.5 74.8 89.6 119.2 159.9 147.1
                         
Diluted EPS   ($0.20) $0.09 ($1.22) $0.37 $0.39 $0.57 $0.58 $0.66 $0.83 $1.05 $0.83
                         
Adj. Diluted EPS   $0.03 $0.16 $0.16 $0.43 $0.62 $0.75 $0.76 $0.91 $1.13 $1.43 $1.31
                         
Basic Shares Out.   79.8 82.0 85.8 87.2 86.4 87.2 88.1 88.0 92.4 104.0 107.9
Diluted Shares Out.   79.8 85.2 85.8 92.1 92.9 103.3 103.3 103.3 109.1 112.5 112.1
                         
Miscellaneous                        
Net Total Capital   4.4 53.6 (57.5) (51.6) 44.5 45.6 116.0 219.6 374.7 389.8 567.5
                         
Financial Ratios                        
Total Rev. / Employee (000's)   239 258 268 290 291 303 306 298 335 335 302
Non-Maint. Rev. / Employee   (000's)   173 163 162 177 179 186 181 170 203 201 168
Non-Maint.   Rev. / S&M Employee (000's) 471 466 470 498 504 525 511 482 593 591 491
                         
ROS   -8.0% 3.6% -47.5% 12.6% 11.2% 14.0% 12.3% 12.8% 13.3% 15.0% 11.5%
Adj. ROS   1.4% 6.7% 6.3% 14.7% 17.8% 18.8% 16.4% 17.9% 18.3% 20.4% 18.1%
Asset T/O   0.55 0.54 0.54 0.63 0.57 0.52 0.55 0.54 0.60 0.61 0.56
                         
ROA   -4.4% 1.9% -25.7% 7.9% 6.4% 7.3% 6.7% 6.9% 7.9% 9.1% 6.4%
ROE   -6.1% 2.7% -43.0% 16.2% 16.1% 20.2% 16.7% 15.3% 15.3% 14.4% 8.9%
Adj. ROE   1.1% 5.1% 5.7% 18.8% 25.6% 27.3% 22.4% 21.4% 21.1% 19.5% 14.0%
                         
ROTC   -0.7% 2.1% 3.1% 10.9% 9.8% 9.2% 11.7% 12.9% 14.5% 15.9% 13.2%
RONTC   -8.0% 19.9% -385.1% -41.6% -936.6% 101.9% 81.4% 48.5% 37.3% 38.3% 28.8%
                         
Total Debt / Total Capital   0.0% 0.0% 0.0% 0.0% 50.3% 42.4% 38.3% 29.4% 23.7% 0.0% 0.0%
                         
Days Receivables   55 57 64 64 65 64 64 72 72 75 78
                         
Book Value   $3.16 $3.40 $2.28 $2.42 $2.44 $3.03 $3.45 $4.68 $5.91 $8.82 $9.84
Tangible Book Value   $2.78 $2.37 $1.29 $1.49 $0.43 $1.29 $0.98 $1.28 $1.52 $4.40 $4.69
                         
Net Cash (Debt) per Share   $3.11 $2.77 $2.95 $2.98 $1.97 $2.59 $2.32 $2.55 $4.32 $5.35 $4.78
Ave. Net Cash (Debt) per Share   $2.98 $2.94 $2.86 $2.97 $2.47 $2.28 $2.45 $2.44 $3.43 $4.84 $5.07
                         
                         
Geographies                        
North America   145.2 149.7 156.6 185.1 226.7 264.2 297.1 321.9 431.3 516.3 524.2
Europe   46.1 47.8 55.0 73.4 80.1 101.9 122.5 129.9 162.0 195.9 202.9
Other   4.1 8.0 8.2 8.9 17.8 25.2 36.1 48.9 56.8 71.6 84.5
Total Revenue   195.4 205.5 219.7 267.4 324.6 391.3 455.7 500.7 650.1 783.8 811.6
                         
% Change (Yr./Yr.)                        
North America     3.1% 4.6% 18.2% 22.5% 16.5% 12.5% 8.4% 34.0% 19.7% 1.5%
Europe     3.6% 15.0% 33.6% 9.2% 27.2% 20.1% 6.1% 24.7% 20.9% 3.6%
Other     94.1% 1.8% 9.2% 99.1% 41.7% 43.7% 35.3% 16.2% 25.9% 18.1%
% Change (Yr./Yr.)     5.2% 6.9% 21.7% 21.4% 20.5% 16.5% 9.9% 29.8% 20.6% 3.5%
                         
% of Total Revenues                        
North America   74% 73% 71% 69% 70% 68% 65% 64% 66% 66% 65%
Europe   24% 23% 25% 27% 25% 26% 27% 26% 25% 25% 25%
Other   2% 4% 4% 3% 5% 6% 8% 10% 9% 9% 10%

 

    2002 2003 2004 2005 2006 2007 2008 2009 2010 2011 2012
                         
Balance Sheet                        
Cash & Equivalents   105.6 82.9 88.9 76.5 120.5 203.7 179.9 159.2 208.9 316.8 190.1
S-T Investments   130.3 140.9 152.2 185.6 280.1 281.2 281.1 305.3 262.0 285.6 345.5
Accounts Receivable   30.0 34.4 42.5 50.5 65.4 72.6 87.5 110.7 147.5 176.1 171.9
Deferred Tax Assets             18.3 22.3 23.7 22.7 21.6 23.4
Prepaid Expenses & Other   8.7 5.1 7.8 9.3 10.4 14.7 12.5 15.3 32.3 23.2 29.4
Current Assets   274.5 263.3 291.5 322.1 476.5 590.5 583.3 614.1 673.5 823.3 760.2
                         
Restricted Cash   12.2 12.2 12.2 12.2 12.0 12.1          
PP&E, Net   47.4 38.7 20.1 21.0 14.4 10.1 9.1 7.9 9.9 16.0 145.5
Goodwill   30.3 82.2 82.2 81.1 170.7 166.9 219.1 287.1 400.7 432.3 510.1
Intangibles   0.5 5.3 2.9 4.2 16.6 12.4 35.5 63.6 77.9 64.8 67.3
L-T Deferred Tax Assets             0.5 7.3 8.3 18.3 23.0 24.1
Other Assets   0.3 1.1 0.9 0.5 6.6 6.1 8.9 8.7 9.3 21.4 5.0
Total Assets   365.2 402.8 409.8 441.0 696.8 798.6 863.1 989.6 1,189.6 1,380.7 1,512.2
                         
Accounts Payable   2.3 4.5 7.5 3.4 3.6 4.1 7.4 4.3 5.9 9.5 8.9
Accrued Liabilities &   Other   24.4 25.7 16.1 17.4 26.5 25.4 34.5 37.4 50.2 58.9 64.5
Accrued Comp. & Related Exp.   12.7 14.3 15.7 20.5 25.8 33.1 29.4 41.5 56.3 58.0 55.4
Income Taxes Payable   2.1 2.0 3.1 4.6 5.2 0.2   12.9   1.2  
Accrued   Facilities Restructuring Charges 4.8 4.6 20.1 18.7 18.8 18.0 19.5 19.9 18.5 17.8  
Deferred Revenues   51.7 51.3 55.2 69.7 85.4 99.4 120.9 139.6 172.6 208.0 242.0
Convertible Senior Notes                   200.7    
Current Liabilities   97.9 102.3 117.7 134.3 165.3 180.2 211.7 255.6 504.2 353.4 370.7
                         
Convertible Senior Notes           230.0 230.0 221.0 201.0      
L-T   Accrued Facilities Restructuring Charges 14.9 10.5 89.2 75.8 65.1 56.2 44.9 32.8 20.4 5.5  
L-T Deferred Revenues       7.2 8.2 7.0 13.7 8.8 4.5 7.0 6.6 8.8
L-T Deferred Tax Liab.           2.2     0.5 0.3   2.5
L-T Income Taxes Payable             6.0 20.7 12.0 12.7 16.7 21.2
Other Liabilities     0.4               6.3 3.5
Minority Interest                       2.4
Equity   252.4 289.6 195.7 222.7 227.2 312.5 356.0 483.1 645.0 992.2 1,103.1
Total Liabilities & Equity   365.2 402.8 409.8 441.0 696.8 798.6 863.1 989.6 1,189.6 1,380.7 1,512.2

 

Informatica                            
Cash Flow Model                            
Fiscal Year End: December 31                            
In   Millions, Except for Percentages & per Share Amounts                    
                             
    2002 2003 2004 2005 2006 2007 2008 2009 2010 2011 2012 2013E 2014E
                             
Net Income   (15.6) 7.3 (104.4) 33.8 36.2 54.6 56.0 64.2 86.3 117.5 93.2 91.5 125.1
Depreciation &   Amortization   10.5 11.2 9.3 9.2 10.1 10.5 5.6 5.5 6.1 6.3 12.3 14.7 14.5
Share Based Compensation   0.4 0.9 3.4 0.7 14.1 16.0 16.3 17.9 23.4 33.3 42.8 54.0 65.5
Amort. Of Intangibles &   Acq. Tech.   1.1 1.2 2.5 1.1 3.6 4.2 8.7 18.0 22.9 27.2 28.6 29.0 16.4
Non-Cash   Facilities Restruct. Chgs. 1.9   21.6 3.7 3.2 3.0 3.0 1.7 1.1 (1.1)      
Accrued Facilities   Restruct.Chgs.   10.4 (4.5) 94.1 (18.3) (13.8) (12.4) (12.6) (13.2) (14.8) (14.4) (24.0)    
Other     4.4 0.5   4.0   (0.6) (0.3) (1.8) (0.7)      
Cash Flow   8.7 20.5 26.9 30.3 57.5 75.9 76.4 93.7 123.3 168.1 152.9 189.3 221.5
                             
Capital Expenditures   6.9 2.6 12.5 9.9 3.8 5.9 4.7 3.3 7.2 12.7 14.1 13.6 15.3
Free Cash Flow   1.8 17.9 14.4 20.3 53.7 70.0 71.7 90.4 116.0 155.4 138.7 175.7 206.3
                             
Cap. Ex. (% of Sales)   3.5% 1.2% 5.7% 3.7% 1.2% 1.5% 1.0% 0.7% 1.1% 1.6% 1.7% 1.5% 1.5%
                             
Interest Expense (Income), Net   (5.0) (3.8) (3.4) (7.3) (12.4) (14.6) (6.9) 0.7 2.7 (2.7) (3.5) (3.2) (3.7)
Income Tax Expense   0.9 2.1 1.2 2.2 5.5 8.0 36.0 25.6 34.8 49.1 44.6 42.1 61.6
EBITDA   4.6 18.8 24.6 25.2 50.6 69.3 105.5 120.1 160.7 214.5 193.9 228.2 279.5
                             
                             
EPS   ($0.20) $0.09 ($1.22) $0.37 $0.39 $0.57 $0.58 $0.66 $0.83 $1.05 $0.83 $0.82 $1.09
Adj. EPS   $0.03 $0.16 $0.16 $0.43 $0.62 $0.75 $0.76 $0.91 $1.13 $1.43 $1.31 $1.38 $1.60
CF / Share   $0.11 $0.24 $0.31 $0.33 $0.62 $0.74 $0.74 $0.91 $1.13 $1.49 $1.36 $1.70 $1.93
FCF / Share   $0.02 $0.21 $0.17 $0.22 $0.58 $0.68 $0.69 $0.88 $1.06 $1.38 $1.24 $1.58 $1.80
EBITDA / Share   $0.06 $0.22 $0.29 $0.27 $0.54 $0.67 $1.02 $1.16 $1.47 $1.91 $1.73 $2.05 $2.43
                             
Est. Interest Inc. per Share,   Net   $0.04 $0.03 $0.03 $0.05 $0.09 $0.09 $0.04 $0.00 $0.00 $0.02 $0.02 $0.02 $0.02
                             
                             
Stock Price                  $38.35
Cash & Equivalents            604.2
Total Debt                      0.0
Diluted Shares Out.           111.3
Net Cash (Debt) per Share     $5.43

                              2013E   2014E
Net P/Adj. E                   24.2    20.8
Net P/FCF                      21.2    18.6
EV / EBITDA                    16.1    13.5

INFA Glossary

Agile SW Development: a group of SW development methods based on iterative and incremental development where requirements and solutions evolve through collaboration between self-organizing, cross-functional teams.  It promotes adaptive planning, evolutionary development and delivery, and a time-boxed iterative approach, and it encourages rapid and flexible response to change.  This approach promotes foreseen interactions throughout the development cycle.

Time-Boxing: an approach where the deadline is absolutely inflexible, but the scope can be reduced as needed.  The alternative is Scope-Boxing where the scope is fixed.  When Scope-Boxing is used, additional time and resources are generally required, which results in cost and time over-runs.  Time-Boxing, by contrast, requires the project stakeholders to prioritize what they want and thus determine which elements will be completed first.  My personal experience suggests that Time-Boxing is a better approach because it addresses the most important functions first and is better aligned with customers’ tendencies to realize what they actually need / want towards the middle or end of the project.  It consequently avoids wasted effort due to misguided initial specifications.

 

Apache Hadoop: an open source version of MapReduce. It enables applications to use thousands of computationally independent computers to process petabytes of data.

MapReduce: a SW framework for processing huge data sets using a large number of computers working in parallel.  In the “Map Step,” a master computer converts the input into a number of sub-problems that are then passed along to worker computers.  (Worker computers may then further divide their sub-problems to be solved by even lower level worker computers.)  Eventually, however, the sub-problems are solved, and the “Reduce step” occurs when the master computer collects the sub-answers and combines them to form the final answer to the problem that it was originally asked.  Note that the computation in this framework is distributed among a number of processors, and they can each work in parallel.  In practice, however, parallel processing is limited by the number of independent data sources and the processors near each data source.
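
For a feel of the programming model, here is a toy word count in Python that mimics the Map and Reduce steps on a single machine.  Production Hadoop jobs distribute these same two functions across many worker nodes (and are typically written in Java); this is only a conceptual sketch.

```python
from collections import defaultdict

# "Map step": each input record (a line of text) is turned into
# intermediate (key, value) pairs -- here, (word, 1).
def map_step(line):
    return [(word.lower(), 1) for word in line.split()]

# "Reduce step": all values for a given key are combined into one answer.
def reduce_step(word, counts):
    return word, sum(counts)

lines = ["the quick brown fox", "the lazy dog", "the fox"]

# Shuffle: group intermediate pairs by key.  In a real cluster, the map
# calls run in parallel on many workers and the groups are routed to the
# workers that run the reduce calls.
groups = defaultdict(list)
for line in lines:
    for word, count in map_step(line):
        groups[word].append(count)

result = dict(reduce_step(w, c) for w, c in groups.items())
print(result)   # {'the': 3, 'fox': 2, 'quick': 1, 'brown': 1, ...}
```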

Appliance: pre-integrated package of hardware (processors, storage media, backplane, etc.) and SW (OS and application SW) that performs a specific function.  Historically, users cobbled together their own systems by implementing applications on general purpose hardware running general purpose operating systems.  This was complex to create and maintain.  In response to customers’ desire for turnkey solutions, vendors such as Oracle began to offer pre-packaged systems designed for specific applications, such as running a database.  By greatly limiting the number of SW and HW combinations that are available, solution deployment becomes easier, and most problems can be solved through the appliance’s management software.

Application Programing Interface (API): a specification for how software components should interact with each other.  It is usually a library that specifies routines, data structures and other variables.

Brokerless Messaging: in many systems where applications communicate with one another, the messages all pass through a “broker” which sits in the middle of the system and routes the messages.  The alternative is brokerless messaging where the applications communicate directly with each other. 

The difference between these architectures is roughly analogous to the difference between two types of air traffic systems.  A brokered system is like a single national hub-and-spoke network where all flights fly to and from a single national hub.  A brokerless system is like a network where all the flights are non-stop.  In computing, the primary advantage of a brokered system is that the applications don’t need to know the location of the other applications—they only need to know the location of the broker.  This makes the connections easier to create and manage.  Another advantage is that the broker holds the messages and thus serves as a kind of buffer between the applications.  Sending and receiving applications thus don’t have to be available at the same time, and if the sending application breaks down, the messages can still be retrieved at the broker.  One disadvantage of brokered systems is that they require considerable network resources because the data has to pass through the broker every time it moves between applications.  Returning to the airport example, the “data” is taking multi-city business trips, but it has to pass through the hub each time it changes cities, and this requires more flights.  The second disadvantage is that the broker becomes a major bottleneck, and the various applications might have to idle as they wait for it.  Returning to the airport analogy again, if all passengers had to pass through a single national hub, that hub would become extremely congested with frequent delays.
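
A minimal sketch of the two architectures, using in-process queues to stand in for the network (all names are invented for illustration).  Note the decoupling: with a broker, a sender only needs the broker’s address, and messages wait in the broker’s queue if the receiver is unavailable; without one, the sender must know and reach each receiver directly.

```python
from collections import defaultdict, deque

# Brokered: all messages pass through one intermediary that buffers and routes.
class Broker:
    def __init__(self):
        self.queues = defaultdict(deque)   # one mailbox per recipient

    def send(self, to, msg):
        self.queues[to].append(msg)        # receiver need not be up right now

    def receive(self, who):
        return self.queues[who].popleft() if self.queues[who] else None

broker = Broker()
broker.send("billing", "order 42 shipped")   # sender only knows the broker
print(broker.receive("billing"))             # billing collects it later

# Brokerless: the sender holds a direct reference to each receiver,
# and the receiver must be available at send time.
class Billing:
    def handle(self, msg):
        print("billing got:", msg)

Billing().handle("order 43 shipped")         # direct, point-to-point call
```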

Database Management System (DBMS): an application used to define, create, query, update, and administrate databases. 

Data Integration: the process of combining data from different sources in order to provide a unified view of the data.  The need for data integration is growing as the number of data sources proliferates as does the number of uses for this data.

Data Mapping: the process of mapping data elements between two different data models.  For example, two different computer systems might store order numbers in different places.  Data mapping identifies those two different places as the first step of enabling those computer systems to communicate with each other.  Data mapping can be performed for other purposes as well.  One example would be identifying data relationships in order to understand the data’s lineage (i.e. “Where did this number come from?  Did it get changed as it passed through the Data Warehousing process?”).  Another example would be identifying and eliminating redundant data as information from multiple databases are consolidated into a single database.
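
As a concrete illustration, a data mapping can be as simple as a record of where the same logical element lives in each system; the schemas below are hypothetical.

```python
# A data mapping between two hypothetical data models: it records *where*
# the same logical element lives in each system, before any code moves data.
order_number_mapping = {
    "logical_element": "order number",
    "erp_system":      ("SALES.ORDERS", "ORD_NO"),       # table, column
    "crm_system":      ("opportunity",  "external_id"),  # object, field
}

# Lineage questions ("Where did this number come from?") are answered by
# walking such mappings backwards from a report to its source columns.
print(order_number_mapping["erp_system"])
```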

Data Masking: the process of hiding sensitive data such as SSN’s or credit card information in order to restrict access to such data.
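
A minimal sketch of the idea: the sensitive value is replaced with a format-preserving surrogate (keeping only the last four digits is a common convention, assumed here) before the data is handed to, say, a test environment.

```python
# Mask a Social Security Number, preserving the format and the last four
# digits while hiding the rest.
def mask_ssn(ssn: str) -> str:
    digits = ssn.replace("-", "")
    return "XXX-XX-" + digits[-4:]

print(mask_ssn("123-45-6789"))   # XXX-XX-6789
```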

Data Warehouse: a central repository of data that has been gathered from various operational applications in order to generate reports and conduct analysis.  A generic example of a data warehouse is shown below:

http://en.wikipedia.org/wiki/File:Data_warehouse_overview.JPG

Operational applications such as ERP’s and CRM systems are designed for their users, but the resulting data is often needed by business analysts or managers.  In order to study this data without burdening or interfering with the operational applications, this data is usually extracted from the operational applications and moved to a Data Warehouse where it can be used for managerial purposes.  The Staging Area stores the raw data copied from the source systems.  This raw data is then integrated into a single structure inside of an Operational Data Store (ODS).  This integration process often involves cleaning the data, removing redundant data, and checking the quality of the data.  The resulting integrated data is stored within the Data Warehouse.  The information within the Data Warehouse represents enterprise-wide data, but its end users often only need certain pieces of this data.  Consequently, customized collections of data are then moved on to Data Marts for use by various departments, primarily to feed their Business Intelligence applications. 

Extract, Transform, Load (ETL): a process in which data is Extracted from a source, Transformed to meet a new need (this step can include data cleansing), and then Loaded into a target system.
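
The three steps reduce to the following skeleton.  The source and target here are plain Python lists for illustration; in practice each end is a database, application, or file system.

```python
# Extract: pull raw records from the source system.
def extract(source):
    return list(source)

# Transform: reshape and cleanse each record for the target's schema.
def transform(records):
    return [{"name": r["customer"].title().strip(), "spend": float(r["total"])}
            for r in records if r["total"] is not None]   # drop bad rows

# Load: write the transformed records into the target system.
def load(target, records):
    target.extend(records)

warehouse = []
load(warehouse, transform(extract([{"customer": "ada lovelace", "total": "12.5"},
                                   {"customer": "x", "total": None}])))
print(warehouse)   # [{'name': 'Ada Lovelace', 'spend': 12.5}]
```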

ETL was INFA’s original addressable market.  Prior to 2006, this technology’s use was limited to integrating on-premise data across multiple departmental databases.

Information Lifecycle Management (ILM): data usually has a lifecycle and thus declines in value over time, though the rate of this decline varies by the type of data and the organization using it.  ILM is a comprehensive approach to managing data (and associated metadata), beginning with its creation and initial storage and ending when the data is no longer needed and is deleted.  Within an ILM system, users create policies about how long different data types should be stored on various storage media, and the ILM system then executes these policies in an automated fashion.  Newer data and data that need to be accessed more frequently are typically stored on faster, more expensive media, and less critical data is usually stored on slower, cheaper media.  ILM also allows users to keep track of where different data is located within the data storage lifecycle.
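
A sketch of the policy idea: rules map a record’s type and age to a storage tier, and the ILM engine applies them automatically.  The tiers and thresholds below are invented for illustration.

```python
# ILM policies: for each data type, (minimum age in days, storage tier).
# The engine scans records and moves each one to the tier its policy dictates.
POLICIES = {
    "transactions": [(0, "online"), (90, "nearline"), (2555, "delete")],
    "emails":       [(0, "online"), (30, "nearline"), (365, "archive")],
}

def tier_for(data_type, age_days):
    tier = None
    for min_age, t in POLICIES[data_type]:
        if age_days >= min_age:
            tier = t            # keep the last threshold we have passed
    return tier

print(tier_for("transactions", 100))   # nearline
print(tier_for("emails", 400))         # archive
```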

In-Memory Database: a database management system that primarily stores data within the main memory (i.e. RAM) instead of on a hard drive in order to provide faster, more predictable performance.

Internet of Things (a.k.a. “The Industrial Internet”): a network of physical objects that computers can track and manage.  RFID and Near Field Communication are examples of technologies that would facilitate an Internet of Things.

Java Virtual Machine (JVM): a virtual machine that can execute Java byte code regardless of the host computer’s architecture. 

Java was designed to have as few implementation dependencies as possible.  The goal was to enable developers to “Write Once, Run Anywhere” (WORA).  This meant that code running on one platform wouldn’t need to be recompiled to run on another platform.  Instead, Java applications would run on the Java Virtual Machine which was the code execution component of the Java platform. 

Master Data: information that is key to a business’ operation.  It can include information about customers, products, employees, materials, suppliers, etc.  Importantly, even though this data is needed by many different groups of users, it is seldom centrally stored.  Instead, it is replicated which creates the opportunity for inconsistencies and inaccuracies.

Master Data Management (MDM): the processes, policies and tools used to manage an organization’s Master Data.  Business units often operate in silos even though they may have some customers in common.  Importantly, the customer usually thinks he is working with a single company as opposed to different departments within a single corporation, and he will thus become frustrated when the different departments cannot coordinate smoothly with each other.  One example would be the checking, brokerage, and mortgage departments of a bank.  Each of these departments will use their own systems and will thus enter the customer’s information separately, which can lead to inconsistencies and inaccuracies.  MDM tools can standardize information, remove duplicate information, and use rules to prevent incorrect data from entering the system in order to create an authoritative source of Master Data for use throughout the organization.
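
A toy illustration of the consolidation step: the same customer, entered separately by two departments, is standardized and merged into one authoritative “golden” record.  Matching on a cleaned-up name and ZIP code is a gross simplification of real MDM matching rules.

```python
# Two departments captured the same customer with slightly different data.
checking = {"name": "John Q. Smith ", "phone": "555-0100", "zip": "10001"}
mortgage = {"name": "john q smith",   "phone": None,       "zip": "10001"}

def standardize(rec):
    rec = dict(rec)
    rec["name"] = " ".join(rec["name"].replace(".", "").lower().split())
    return rec

def merge(a, b):
    # Survivorship rule (simplified): prefer non-null values from record a.
    return {k: a[k] if a[k] is not None else b[k] for k in a}

a, b = standardize(checking), standardize(mortgage)
if a["name"] == b["name"] and a["zip"] == b["zip"]:   # naive match rule
    golden = merge(a, b)
    print(golden)   # one authoritative master record for both systems
```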

Metadata:

Definition 1: Structural Metadata—the design and specification of data structures; information about the containers that the actual data is stored in.

Definition 2: Descriptive Metadata / Meta Content—this is information or descriptions about the data itself such as what language it is in, what tools were used to create it, and where to find related data.

Middleware: SW that connects two otherwise separate applications.  It can generally be thought of as “glue” or “pipes” between other applications.  Importantly, middleware often has a “step-child” status because while it is absolutely necessary, it is typically not embraced or developed with as much enthusiasm as the applications themselves because it is much less visible and tangible.  This status probably also stems from the fact that middleware resides in-between applications.

Natural Language Processing: a field of computer science involved in helping computers to understand and respond to humans’ natural language (i.e. the language people use when they communicate with each other). 

Nearline Storage: an intermediate type of data storage in-between online storage which supports frequent, very fast access to data and offline storage which is used for backups or long term storage with infrequent access.  An example of nearline storage is a tape library where a robot retrieves the tapes.  This process takes a few seconds, but it is relatively quick.  Nearline storage and archiving are a means by which to reduce the size of the online DB’s and thus improve the online DB’s performance.

NoSQL Database: a type of database that uses looser data models than relational databases.  NoSQL databases may involve some structured data which has led some developers to describe them as “Not only SQL” databases. 

Relational databases were introduced in the 1970’s, and their general architecture reflected the high cost of storage and limited data complexity of that time.  A number of developments since then have created demand for less rigid databases including:

  • The maturation of the internet which has resulted in:
    • Much higher quantities of data
    • More diverse types of data (including unstructured data)
    • More frequent data access
    • More intensive data processing.
  • Much lower costs for data storage and processing.

NoSQL databases are most often used to take advantage of distributed computing applications that harness multiple, low-cost computers (“horizontal scaling”) instead of using a larger, more powerful single computer (“vertical scaling”).  Such horizontal scaling results in more economic performance gains and better system availability.  NoSQL databases tend to be most useful in situations where an extreme quantity of information needs to be stored and retrieved AND the relationships between the data are less important. 
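
To make the “looser data model” point concrete: the same order that a relational database would normalize across several tables can live in a document-style NoSQL store as one nested, schema-free record (sketched below with plain Python dicts standing in for JSON documents).

```python
import json

# Relational shape: normalized rows spread across three tables,
# tied together with keys and joined back together at query time.
customers = [{"id": 1, "name": "Acme"}]
orders    = [{"id": 10, "customer_id": 1}]
lines     = [{"order_id": 10, "sku": "W-7", "qty": 3}]

# Document (NoSQL) shape: the whole order is one self-contained record.
# No join is needed, and different orders may carry different fields.
order_doc = {
    "_id": 10,
    "customer": {"name": "Acme"},
    "lines": [{"sku": "W-7", "qty": 3}],
}
print(json.dumps(order_doc))
```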

Online Transaction Processing (OLTP): a class of systems that facilitate and manage transaction-oriented applications.  In computer science, “transaction processing” refers to information processing that is divided into individual, indivisible operations called “transactions.”  

Transactions: units of work executed against the database.  Importantly, transactions are “all or nothing” actions, so they are either fully completed and thus change the data within the database or they somehow fail and result in zero change to the database. 
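
The “all or nothing” property can be demonstrated with SQLite from Python’s standard library: if any statement in the unit of work fails, rolling back leaves the database exactly as it was.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
db.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 0)")
db.commit()

try:
    # One transaction: debit alice, credit bob.  The second statement is
    # deliberately broken, so the whole unit of work must be undone.
    db.execute("UPDATE accounts SET balance = balance - 40 WHERE name='alice'")
    db.execute("UPDATE no_such_table SET balance = balance + 40")
    db.commit()
except sqlite3.OperationalError:
    db.rollback()   # all or nothing: the debit is undone too

print(db.execute("SELECT name, balance FROM accounts ORDER BY name").fetchall())
# [('alice', 100), ('bob', 0)] -- unchanged, as if the transaction never ran
```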

Query Optimizer: A query is a request for specific information from a database, and some queries are very complex.  For complex queries, there are usually many ways to access the required data, but the time required to execute them can range from one second to multiple hours.  It is thus beneficial to first ascertain the most efficient path by which to access the required data.  Query optimization is an automated means of finding a very efficient path to the data, though interestingly, the query optimizer may not find the absolute shortest path because doing so would itself require considerable time. 

Relational Database: A database that stores information in tables that can be linked to other tables.  (Each table is similar to a spreadsheet, but the links between tables are what make the database relational.)  A simple example of such a data table is shown below:

  Employee #   Last Name   First Name   Age   Department
  103212       Smith       Peggy        35    Accounting
  201352       Thomas      Bob          24    Sales
  567501       Davis       Jimmy        54    Operations

Run time: the period during which a computer program is executing.  A Run-time System is SW that supports the execution of computer programs. 

SAP Business Warehouse: SAP’s business intelligence, analytical, reporting, and data warehousing solution.

Software Development Kit (SDK): a set of development tools used to create applications for a given software or hardware platform. 

Structured Query Language (SQL): a special-purpose programming language used to manage relational databases.  SQL is the most widely used database language.

I do not hold a position of employment, directorship, or consultancy with the issuer.
I and/or others I advise hold a material investment in the issuer's securities.

Catalyst

Revenue growth and margin expansion should begin to build momentum in late 2013 / early 2014, validating management's claim that INFA has a large market opportunity and that they are well positioned to capitalize on it.