Great disparities exist in an imperfect technology marketplace. No two companies pay the same for technology, pricing fluctuates widely, and sellers have a distinct advantage over buyers, who rarely have real-time access to market pricing. Most purchasers also ignore the potential impact of future business model changes in their negotiations.
In this multi-part series, we will review best practices for improving your negotiating skills and position when acquiring technology, covering procurement, maintenance, professional services, legal terms, and some miscellaneous opportunities. Technology negotiation is not about a “WIN-WIN” so much as being “TOUGH BUT FAIR”: securing reasonable pricing while establishing and maintaining an ongoing, value-added, mutually respectful vendor/customer relationship.
Software & Hardware, Pricing
The search for the right price versus retail is elusive, and it takes time to find the “basement.” Use the outlets available to you: research analysts, peer networking, price negotiating firms, competing offers, etc. An end-of-fiscal-period deal can, of course, yield additional savings; however, the general rule of thumb is that the deal available today will be available tomorrow (regardless of the sales pitch).
Pricing is not just about the selling price. Try to lock in future pricing or discounts. The exceptions are price-erosion-prone markets (like storage), which can always be negotiated at a later point in time. Base your payments on your acceptance and full production use, not on delivery or invoicing; depending on the product and implementation timeline, this can defer some expenses for many months, and downstream maintenance costs are positively impacted as well.
Software & Hardware, Warranty and Licensing
Most vendors differentiate between warranty and maintenance periods. Extend the warranty as long as possible, and ensure both that maintenance runs serially after the warranty (atypical in the industry) and that all the maintenance value-added services are provided as a component of the warranty.
It is very easy to buy low-cost servers with a three-year, next-business-day repair/replace warranty at no additional charge. Round-the-clock coverage with four-hour response is an upcharge that needs to be evaluated against a low-cost spare pool and reduced MTTR (mean time to repair) service levels.
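That evaluation is simple arithmetic. The sketch below compares the two options; every price, failure count, and downtime figure is a hypothetical assumption for illustration, not vendor data:

```python
# Compare a premium 24x7/four-hour warranty upcharge against a spare pool.
# All figures below are hypothetical assumptions for illustration only.

def annual_coverage_cost(servers: int, upcharge_per_server: float) -> float:
    """Yearly cost of the premium repair/replace coverage upcharge."""
    return servers * upcharge_per_server

def annual_spare_pool_cost(servers: int, spare_ratio: float, server_price: float,
                           expected_failures: int, extra_downtime_hours: float,
                           downtime_cost_per_hour: float) -> float:
    """Yearly cost of keeping spares (amortized over three years) plus the
    business cost of any extra downtime versus the four-hour option."""
    spares = max(1, round(servers * spare_ratio))
    amortized_spares = spares * server_price / 3
    downtime_penalty = expected_failures * extra_downtime_hours * downtime_cost_per_hour
    return amortized_spares + downtime_penalty

premium = annual_coverage_cost(servers=100, upcharge_per_server=400)
spare_pool = annual_spare_pool_cost(servers=100, spare_ratio=0.05,
                                    server_price=3000, expected_failures=4,
                                    extra_downtime_hours=0.5,
                                    downtime_cost_per_hour=500)
print(f"Premium coverage: ${premium:,.0f}/yr vs. spare pool: ${spare_pool:,.0f}/yr")
```

In this made-up scenario the spare pool wins easily; with a higher cost of downtime or pricier hardware, the four-hour upcharge can flip the comparison.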
There are more licensing models today than ever before, including perpetual, subscription (XaaS), and hybrid, along with various license compliance metrics: multi-core processor counts, user counts, company revenue, etc. Staying on top of the right choices is purely a business decision driven by your company’s business model, culture, and cash flow.
Beware of the “appliance” pricing trap. Your vendor will love to charge you for the software every 3 or so years when the appliance hardware platform is no longer supported. Ensure you protect your software investment (by far the majority of the cost) when refreshing appliance hardware platforms.
Regardless of the licensing model, address future pricing up front. Stay away from increases based on your company growth. Your company’s success should not necessarily translate to an increased income stream for your vendor. Most vendors are more than willing to allow for “organic” growth if you protect them from lost revenue due to an acquisition. Also, let accumulated subscription fees act as prepaid perpetual license fees (just in case your business model changes).
In our next part of the series, we will cover maintenance and professional service components.
Originally published in December 2015, this blog article has been one of the most popular we’ve posted.
Your Business and the Internet of Things:
The emergence of IoT, potential pitfalls and why you should care
There’s been a lot written about the Internet of Things (IoT), but many people don’t have a firm grasp of its current state and how it will affect their business. In this article, we will explore the booming growth of IoT, what it means for companies now, and how your business can leverage it to drive business goals.
What is IoT?
In short, the IoT is a network. Just like the Internet connects people, the IoT connects devices, so that a wide range of physical objects can exchange and transmit data. What this means in practice is that things like refrigerators, cars, manufacturing equipment, and HVAC systems can be controlled, monitored, and analyzed in much the same way that computer systems can. This can provide incredible benefits to consumers and businesses, allowing for increased efficiency, marketing opportunities, reduced costs, and innovative new products.
The IoT is evolving rapidly from a mere novelty to an integral part of the modern economy. Its first iteration was a prototype soda machine that could tell researchers at Carnegie Mellon University its current stock levels and whether drinks were cold. Today it encompasses a wide range of devices, technologies, and functions and is only expected to continue evolving for the foreseeable future.
The Internet of Things is truly emerging. Gartner predicts that by 2020 there will be 26 billion units, and that IoT-related products and services will generate revenues in excess of $300B. There is little doubt that the IoT is already significant and that in the next decade an enormous number of devices will have network-connected functionality. This trend is driving substantial growth for businesses and allowing them to improve operations and develop new products, providing an estimated economic value add of $1.9 trillion across sectors by 2020.
The “basket of remote controls” problem
Unfortunately, rapid growth also poses potential problems. Businesses must be aware that the IoT is developing in a disorganized fashion. New technologies are being added device by device and vendor by vendor, with little to no coordination. This means that devices from different manufacturers may not be able to communicate, or users may have to coordinate across several different interfaces to track all their devices.
For example, someone may have a Fitbit device, an Apple Watch, and an Internet-enabled home security camera. These devices, and others, connect according to vendor-specific protocols and technologies such as WiFi or Bluetooth, which prevents their linkage under common access and management frameworks. The problem is made even worse by the fact that each vendor requires ad-hoc device configuration according to its own IP, DNS address, password, and naming standard requirements.
IoT may be exploding, but it will be several years before a standard emerges that makes it easier to leverage. The situation is akin to having a basket of remotes, each one operating a different device in the entertainment center. This state of affairs is still very fluid, even as technology leaders join forces to create standards for communication between devices.
Why is this happening?
The scenario we see developing in IoT is not unique. Practically every major technology started with an abundance of incompatible vendor offerings. Early computers operated according to vendor-specific platforms until de facto or government standards were introduced. Before TCP/IP became the standard protocol, there was a wide range of networking options, including NetBIOS/NetBEUI, UUCP, and AppleTalk. In all cases, the trend has been the same: as these technologies matured, a standard emerged to which most companies adhered. IoT is, in all likelihood, following a similar pattern of progression.
What is the future of IoT?
In order to leverage the power of IoT most effectively, companies need to understand how the new technology will likely progress. IoT will follow a trajectory similar to past technologies. This includes five distinct stages that companies should closely monitor to determine their strategy.
Hype – This is the peak of expectations for a new technology. In IoT terms, this can be thought of as the point when network functionality was added to devices primarily for a novelty factor, but the technology was not in widespread use.
Vendor driven zoo – In this transition period, vendors begin to realize the potential of a new technology, and each races to develop the standard. This leads to a number of competing technologies, making it difficult for consumers and other businesses to choose and use products effectively. This is the stage we are currently in.
Consolidation – This is an intermediate stage between the vendor driven zoo and standardization. There will be a decrease in the number of competing technologies, but still no uncontested standard.
Standardization – In this stage, a standard emerges and other technologies fade into the past. Companies should pay careful attention to signs of this stage to stay ahead of the curve and not get stuck with legacy technology.
Commoditization – This is the final stage of an emerging technology, when the standard is so ubiquitous that it becomes a commodity. This is the current stage of technologies like TCP/IP.
What will drive IoT adoption?
The keys to enabling IoT across a greater number of devices are pervasive networking, sensors, and actuators. These technologies will make IoT more cost effective and more powerful, expanding the scope of its viability.
Pervasive networking – In order for devices to stay connected, there must be more widespread access to WiFi, Bluetooth, and 4G/5G data. Another roadblock is the limited number of IPv4 addresses available. The protocol provides only 4.3 billion addresses, many of them reserved for special uses. As of September 2015, four of the five regional internet registries had exhausted allocation of all blocks not reserved for IPv6 transition. IPv6 provides 3.4 × 10^38 unique addresses, more than enough for the foreseeable future, but the new protocol is not yet ubiquitous. If billions more devices are going to be connected to the Internet, there must be broader deployment of IPv6, which is capable of uniquely identifying every possible device.
Sensors – The availability of low cost sensors such as RFID readers or machine recognition devices will further expand the area of applicability for IoT. As costs and ubiquity of these technologies increases, IoT will become more powerful and cost effective.
Actuators – Actuators allow network-connected devices to actually control things. Many of the most exciting IoT applications will require the addition of actuation devices that can be remotely controlled to perform specific functions.
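The address-space constraint described under pervasive networking is easy to quantify. A quick sketch of the 32-bit versus 128-bit arithmetic:

```python
# IPv4 addresses are 32 bits; IPv6 addresses are 128 bits.
ipv4_addresses = 2 ** 32   # about 4.3 billion
ipv6_addresses = 2 ** 128  # about 3.4 x 10^38

print(f"IPv4: {ipv4_addresses:,} addresses")
print(f"IPv6: {ipv6_addresses:.2e} addresses")
```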
What can companies do today?
The last thing any company wants is to end up with the Betamax equivalent of IoT. But that doesn’t mean that you should wait to act until things become clearer. An even worse scenario than picking an unpopular technology is getting left behind by not adopting any IoT technology at all. Being reactive, rather than proactive, will only lead to missed opportunities. It is unwise to wait until a competitor or disruptive new entrant takes your business.
The time to frame IoT strategy is now
Leveraging IoT in a future forward way means aggressive adoption while still having an awareness of current limitations and potential pitfalls. Start by gathering your team and envisioning how your products will fit into an IoT enabled world. What enhanced functions would it provide? How can IoT best be implemented?
Choose a participation model
There is a wide range of ways IoT can be implemented in your company. Before moving forward, it is important to think about how your company’s products and solutions fit into a network-connected framework. You will need to reach an internal consensus on which participation models you will pursue and in what timeframe. The models below capture various levels of IoT implementation, each progressively more involved than the last.
Model 1 – Focus only on leveraging the IoT reach capabilities for promotional and advertisement purposes. This primarily means tracking user activity, delivering location aware ads, and developing promotions such as automatically reminding users when their device needs to be replaced or upgraded.
Model 2 – This model involves a more proactive stance to IoT. Companies will envision and prototype current product extensions by adding IoT functionality. An example of this might include adding basic network controls to an existing line of thermostats.
Model 3 – IoT will offer many opportunities for companies to innovate and develop new products to take advantage of emerging functionality. In model 3, companies will offer unique additional functionality and products that use IoT.
Model 4 – In this model, companies are actual IoT players, rather than simply users. This means that they develop and introduce components or services to augment actual IoT capabilities. This role involves more innovation in the IoT sphere and represents the first level in which companies may actually influence the future of the technology.
Model 5 – In this stage, IoT becomes a significant part of company strategy. This involves implementation of IoT core technologies across many areas and a role in developing standards and the direction of technologies according to your business needs.
What are the best strategies for models 1 and 2?
For companies targeting the lower levels of IoT implementation, there may not be significant will or resources available to track emerging standards and develop new technologies. However, in order to stay ahead of the curve, it is important to begin investing now. Some key strategies for success at these levels include:
Stay vendor neutral – Do not lock into a particular IoT vendor’s vision yet. It is too early to know which technologies will still be around in five years. Companies should instead focus on identifying potential partnerships that might provide an early-adopter advantage. In particular, shop around for potential IoT development houses that can help you implement IoT more effectively.
Identify component gaps – Look for areas that your company needs to work on to bridge the gap between product ideas and current technology. For example, your idea may require a new type of sensor or actuator dongle that you can develop and patent with the help of external manufacturers. This will help form the foundation for a future IoT strategy.
Begin R&D – After identifying gaps between your current technology and IoT vision, initiate an R&D effort to begin bridging that gap and prototyping possible extended functions and products.
Hire talent – Without the right people, your company cannot develop innovative, effective IoT products. To stay ahead of the competition, start hiring and developing skills now, rather than later. Keep in mind that this talent does not have to be internal. Many companies can benefit from third party consultants and outside companies to help track standards closely and to keep a pulse on the rapidly changing IoT field. It is also important to have people to consider the potential security threats that the new technology can pose and ensure that your products are safe.
Evaluate core architecture – Many companies believe that strong mobility services are all about having beautiful, modern apps. Although a nice UI helps, it is important to remember that most functionality occurs on the backend. Re-evaluate your core architecture to ensure that the central systems can provide the required business processes and data access necessary to support IoT in your business.
What is the best strategy for model 3?
Most of the recommendations for models 1 and 2 also apply to model 3, except that your investment in R&D, involvement with standards, and identification of partners will more closely mirror the recommendations for models 4 and 5 below.
What are the best strategies for models 4 and 5?
Companies targeting levels 4 and 5 likely already have a much clearer understanding of the most effective strategies and need less guidance. They should keep the advice for models 2 and 3 in mind, but take it further by getting more involved and staying proactive.
Get involved with standards – Don’t allow other companies to decide the future of IoT. Form coalitions with other leaders and get involved today to steer standards toward technologies that will be beneficial to your company.
Invest in R&D – At this stage, model 4 and 5 companies should be making significant investments in R&D and hiring staff with the skills necessary to make their goals a reality.
Identify partnerships – By developing technologies with other companies, you can help create more robust and innovative IoT offerings. Emerging technologies demand cooperation.
Although IoT is rapidly shifting and there is no clear standard, your company cannot afford to ignore it. Network-connected devices will become ubiquitous with or without you, and it is important to stay proactive, monitor emerging trends, and form partnerships to remain competitive. In order to benefit from the new technologies, companies must be aware of the potential for certain technologies to rapidly fall out of favor and for dark horses to emerge victorious. At this stage, it is important to remain vendor neutral but begin planning for a future in which a standard emerges. IoT is coming; by planning for the future standardization and commoditization of the technology, your company can gain a competitive advantage and drive business goals.
Did you know that ITIL 2011 has 26 process towers and you could measure more than 200 elements within those towers to gauge effectiveness of each process? As you can imagine, measuring more than 200 elements would take a staff of two or three just to collect and analyze the data.
Deciding on the meaningful elements to measure is key to the success of your ITIL implementation. One must start with a basic evaluation of the maturity of each ITIL process tower implemented. The table below can be used to measure each of the processes in place.
Level 0 – Does not exist: No evidence of any activities supporting the process being evaluated.
Level 1: Random activities supporting the process are observed, but no one is aware of how each activity relates to the others. No formal documentation or dedicated resources have been identified to own the process.
Level 2: Activities support the process, but there is no measurement of the effectiveness of the process. A tool is in place to support the process and resources are defined, but roles and responsibilities are not clearly defined between those resources and other functional IT areas.
Level 3: The process is defined and measured, and resources understand their roles and responsibilities.
Level 4: Processes are measured and reviewed on a regular basis. Management conducts formal improvement planning, and resources are measured on their effectiveness.
Level 5: Processes are well defined and measured, and continuous improvement is in place. Linkages between processes are defined and understood by everyone in the organization. Processes have direct links between IT and corporate policy, and continuous improvement is embedded into the process and teams.
Any organization should strive to achieve at least maturity level 3, with well-defined processes in place for the ITIL process towers implemented. One should also review the process towers not yet implemented and develop a roadmap to continue introducing ITIL processes and enhancing the processes in place to improve IT service management and operations.
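As a sketch of how such an evaluation could be tracked, the snippet below scores a handful of process towers (the scores are hypothetical) and flags those below the recommended level-3 target:

```python
# Hypothetical maturity scores (0-5) for implemented ITIL process towers.
maturity = {
    "Incident Management": 4,
    "Problem Management": 2,
    "Change Management": 3,
    "Availability Management": 1,
    "Service Catalogue Management": 3,
}

TARGET_LEVEL = 3  # minimum maturity an organization should strive for

below_target = {tower: level for tower, level in maturity.items()
                if level < TARGET_LEVEL}
for tower, level in sorted(below_target.items()):
    print(f"{tower} is at level {level}; build a roadmap to reach level {TARGET_LEVEL}")
```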
What’s the “magic” in identifying the right service elements to measure which will drive value?
There are some basics that will yield the most benefit to drive the desired outcomes for your customers. These basics are considered the fundamental IT service management processes that provide the organization the necessary process framework to operate and expand capabilities.
Focus on the core: Incident, Problem, Change, Availability and Service Catalogue. WGroup recommends measuring and tracking the following:
Service Catalogue / Request Management
• Percentage of time a service request is fulfilled within the expected time
Incident and Availability Management
• Mean time to restore service
• Count of incidents by priority
• Percent of incidents that caused lost sales, lost product, or SLA penalties
Problem Management
• Root cause analysis (RCA) responsiveness – the expected time for the RCA to be created
• Percent of SEV 1 and SEV 2 incidents where root cause was identified and corrected
Change Management
• Number of undocumented/unauthorized changes
• Number of failed changes as a percent of all changes
• Count of changes by category (critical, standard, etc.)
• Count or percent of changes that caused outages
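A few of the change-related metrics listed above can be computed directly from ticket data. A minimal sketch in Python, where the records and field names are hypothetical rather than from any particular ITSM tool:

```python
# Hypothetical change records; the field names are illustrative only.
changes = [
    {"id": 1, "status": "success", "caused_outage": False, "authorized": True},
    {"id": 2, "status": "failed",  "caused_outage": True,  "authorized": True},
    {"id": 3, "status": "success", "caused_outage": False, "authorized": False},
    {"id": 4, "status": "success", "caused_outage": False, "authorized": True},
]

total = len(changes)
failed_pct = 100 * sum(c["status"] == "failed" for c in changes) / total
outage_pct = 100 * sum(c["caused_outage"] for c in changes) / total
unauthorized = sum(not c["authorized"] for c in changes)

print(f"Failed changes: {failed_pct:.0f}% of {total}")
print(f"Changes causing outages: {outage_pct:.0f}%")
print(f"Undocumented/unauthorized changes: {unauthorized}")
```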
Organizations that embrace ITIL standards and drive continuous improvement realize operational efficiency benefits that translate into overall service improvement and lower operating costs. Some organizations fear ITIL because it appears too complex, time consuming, and costly. Service management requires an investment in tools, technology, and people, and should be treated as a journey rather than a destination. These five process areas represent a service foundation that every IT organization needs to be proficient in to provide good fundamental IT services.
The Internet of Things: What is in store for 2016?
The Internet of Things (IoT) is built on a network of cloud computing and data-gathering sensors. It is mobile, it is virtual, and it is soon to be everywhere. At an IT conference in September 2015, Marty Trevino, Organizational Architect and Senior Strategist for the National Security Agency, was quoted as saying, “In a few years the average person will come into contact with more than 5,000 connected devices on a daily basis.” This astounding prediction doesn’t seem so outlandish when you examine current IoT trends.
Current and Future IoT Trends
With inventions like the all too popular FitBit, which tracks all sorts of personal daily exercise and movement data, IoT has quickly made its way into our everyday lives. Consumers can now wear connected technology, they can control their home’s heating and cooling systems from afar, and they can even receive alerts from their appliances about needed maintenance. In short, IoT has become a well-established member of the private sector. But what about the public sector?
Experts predict that IoT will become a large part of the public sector starting in 2016. Gregory Crabb, Acting Chief Information Security Officer and Digital Solutions Vice President for the United States Postal Service (USPS), was quick to point out that, “At the Postal Service, we’ve been looking at connected devices for over 20 years. Our goal is to take these connected devices and make our business more efficient and effective.” The USPS is planning to deploy more than 200,000 mobile delivery devices to mail carriers in the near future. These IoT devices will help improve the customer experience by tracking and recording a wealth of delivery information. From the best time to deliver packages to certain customers to the expected delivery window, the USPS is looking forward to revamping the mail delivery industry with the help of IoT.
Other public sector agencies have announced that IoT solutions will continue to be explored in 2016. The reason behind this additional exploration can be summarized in a single word: data. As the USPS and FitBit examples show, the amount of information an IoT device can gather is staggering. The efficient and effective collection of relevant data could be incredibly beneficial to public sector agencies struggling to meet key milestones on limited budgets. However, the large amounts of data collected present a new set of challenges, ranging from the network capacity needed to handle the data to the security of the information collected.
In 2016 look for the public and private sectors to adopt more IoT devices, while simultaneously conducting risk analysis to combat future IoT challenges. The amount of data that an organization can retrieve with IoT devices will continue to grow, which will require organizations to actively combat the aforementioned challenges, while also endeavoring to fully understand all of the new data. As the use of IoT expands in 2016, be on the lookout for new organizational policies and guidelines that are designed to reap the benefits of IoT devices and also protect the end user.
WGroup helps companies embrace new technology and align IT with business objectives. Visit http://thinkwgroup.com/services/ to see how we can help you with your IT transformation.
As IT professionals, our job is to align technology with business objectives, helping the business drive increased revenue and performance. That can be challenging, as for many shops, the operational burden of IT has not gone away.
Finding the balance between maintaining a quality lights-on operation, managing costs, and driving operational improvements to a higher IT maturity level is a significant challenge. The day-to-day complexity of managing an IT organization requires full attention and the lion’s share of available resources.
There’s limited bandwidth to focus on process improvements, yet IT has to keep pace and provide value at a time when disruptive technologies are changing the way services need to be delivered. The business expects IT to provide technology services in support of the business strategy and to demonstrate technical leadership that can influence market differentiation. These expectations require IT to provide consistent quality of service and to develop capabilities that lead to technical innovation. To support this, IT must reach a level of IT service management maturity that allows it to manage demand and deliver against that demand with quality.
So how do you know where you are and where you need to go? First things first: conduct an IT service management (ITSM) assessment to baseline your current level of maturity, identify gaps, and develop an improvement plan. With an assessment, the end-to-end process framework and organizational capabilities are baselined to provide a platform to build upon. Developing a service-improvement plan emphasizing process improvements, capabilities, and agility will improve IT’s ability to adapt to change and sustain the quality delivery of services.
Specifically, being able to outline the key service delivery constraints by identifying root cause is a way to link the symptoms to problems and fixes that improve service and benefit the IT team and customers alike.
The illustration below is an example of a high-level assessment that outlines common IT constraints and potential impacts to the ITIL processes.
Common IT Service Delivery Constraints: Delivery Lead Times; Quality of Service; Ability to Scale; Cost of Service; Lack of Innovation; Overall IT Ineffectiveness.
Addressable ITIL Processes: Business Relationship Management; Service Portfolio Management; Financial Management of IT Services; Service Catalog Management; Service Level Management; IT Service Continuity Management; IT Security Management System; Transition Planning & Support; Service Asset & Configuration Management; Release & Deployment Management; Service Validation & Testing; Continuous Process Improvement.
This high-level assessment provides a starting point for identifying potential areas of process improvement. Another level of detailed analysis would be required to assess each underlying process and the details to be addressed (resources, process, technology). Each process is itself an integral part of the overall ITSM delivery model.
Periodic assessments and service-improvement plans should be routine. IT is expected to constantly improve service delivery while providing value, and not necessarily by way of long-term projects. In essence, you have to change the tire while the car is moving. An assessment will provide insight into a path forward, but time is of the essence. Using a time-to-value approach will provide guidance in prioritizing the list of improvement activities. Implementing improvements that yield some immediate benefit (time-value) demonstrates progress while the longer-term improvement plan is implemented.
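One way to make a time-to-value ordering concrete is to rank candidate improvements by estimated benefit per month of effort. The backlog items and scores below are hypothetical:

```python
# Hypothetical improvement backlog: 'benefit' is a relative value score,
# 'months' the estimated time to implement.
improvements = [
    {"name": "Automate incident routing", "benefit": 8, "months": 2},
    {"name": "Rebuild service catalog",   "benefit": 9, "months": 9},
    {"name": "Standardize change forms",  "benefit": 5, "months": 1},
]

# Highest benefit per month first, so quick wins surface to the top.
ranked = sorted(improvements, key=lambda i: i["benefit"] / i["months"], reverse=True)
for item in ranked:
    rate = item["benefit"] / item["months"]
    print(f'{item["name"]}: {rate:.1f} value per month')
```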
These improvements to service and capabilities are visible and will boost IT’s value to the company. Easier said than done? Sure, but worth it.
When you build a house, do you build it haphazardly, without requirements or specifications? Of course not. You want to ensure that the house has a solid foundation, running water, electricity, access to public services, physical security, and zones within the house, all while adhering to the specific zoning regulations of your town and state.
You may select a plot of land and have an idea in your mind of what the house will look like. You’ll draw up some diagrams, or even hire an architect to provide a set of well-designed, detailed blueprints that identify, down to the level of detail, both the “private” services within the house and the “public” services you need to access from the town, state, and even federal government.
The same holds true for corporations now venturing further into emerging technologies such as cloud computing, IoT, and big data, which are converging in many areas of the enterprise. A well-defined strategy with specified services (i.e., private, leveraged public, and “hybrid”), combined with a mature Enterprise Architecture-driven framework whose building blocks act as investment enablers for decisions, is critical for cost avoidance and for achieving maximum, accelerated ROI. In addition, a mature Enterprise Architecture provides the guardrails to mitigate risk as technologies converge to meet the goals of the organization: common “services” provide a stable foundation for the “house” while also being leveraged to make the lives of its inhabitants better and to get more value out of their planning and building investments.
Enterprise Architecture (EA) establishes the roadmap to achieve an aligned business-technology mission, based on the organization’s tactical and strategic drivers, through optimal performance of its core business processes within an efficient information technology (IT) environment. Simply stated, enterprise architectures are blueprints that systematically and completely define an organization’s current (baseline) or desired (target) environment. Enterprise architectures are essential for evolving information systems and developing new systems that optimize their mission value. If maintained and implemented effectively, these institutional blueprints assist in optimizing the interdependencies and interrelationships among an organization’s business operations and the underlying IT that supports them. On the path to maturity, these interdependencies and interrelationships can be developed into services provided by IT that both support and protect the business.
The Enterprise Technology Framework, developed by a mature Enterprise Architecture organization, is aligned with the business applications and with the lifeblood of the organization, its data, if the business objectives and benefits are to be realized while risk is mitigated and, where possible, eliminated.
An Enterprise Technology Framework defines the technology services and functions (IT capabilities) required to support the business applications and data, including Common (or shared) Application Services, Common Data Services, Common System Services, Network Services, Security Services, and Platform Services, as well as the management tools used to support the delivery of IT services. It also helps define any specifics a line of business may require, or, in a “Software Defined” hybrid cloud model, which systems or applications must stay in the corporate datacenter (for example, a “system of record”) versus what may be hosted in a SaaS or public model and potentially accessed through a “system of engagement” on a mobile device, where data and analysis output can be received. This reference framework can help define private or bounded service definitions, policies, and patterns; establish the enterprise policies for a hybrid cloud delivery model; and determine how best to access (and secure) structured and unstructured data outside the organization, which is now being captured and delivered via technology sensors and devices across entities, geographies, and even people (e.g., via wearable devices): the Internet of Things (IoT).
Our society is rapidly developing and transforming with the explosion of data and the question of how best to harness it to full advantage for us as consumers of that data. Enterprise Architecture provides a roadmap for an organization on where best to leverage existing assets to develop services that take advantage of “Big Data” and the IoT for consumption in the emerging digital society. As such, a well-established Enterprise Technology Framework aligns with an Enterprise Security Architecture Framework to define the guardrails and protection, based on regulatory and corporate policy, that help define and further develop those services.
The Enterprise Technology Framework delivers a number of benefits:
• Provides a repository of information about the technology (IT enablers and capabilities) required to support both the various parts of the business, and the achievement of the overall business
goals and objectives, which guides IT investment decisions.
• Provides a repository of agreed technology principles, standards, products and components that can be selected at system design time and implemented.
• Reduces the amount of time spent by individual development projects in the evaluation and selection of products and components.
• Provides pre-defined combinations of implementable components, standards and interfaces.
• Ensures individual systems can be integrated effectively, including the sharing of common services, functions, ‘middleware’ and data.
• Provides a known technology base for service delivery planning (capacity, performance, and availability) and measurement, to meet future business requirements.
• Provides the basis for the specification of the required (to be) IT systems.
• Helps identify, define, and further develop critical and secondary services starting at the end-user access layer.
In conclusion, mature EA helps identify what is required before embarking on large-scale journeys into the technical unknown. In the absence of a defined framework of IT enablers, capabilities, and requirements, many assumptions and design decisions may be made in a vacuum (especially at the project level). This greatly increases the risk that the overall business objectives, requirements, and expectations will not be met, putting the enterprise at risk, or that the enterprise as a whole will miss out on greater opportunities that cannot be realized at the project level alone. The organizations that have a well-defined and mature EA leveraging a defined framework are best positioned to embark on the journey to cloud, which will also enable accelerated and secure access to “Big Data” delivered via the Internet of Things.
“Is our data secure? Where are we vulnerable? What are you doing to keep us OUT of the headlines?”
Every CIO is being asked these questions by their Board of Directors on a consistent basis, even more so in the last twenty-four months. More often than not, the intent behind the question is simply, “Are we protected from hackers?” While a comforting answer might be to describe how high and wide the perimeter wall around your enterprise castle is, such a wall does not protect you from the dangers that lie within. Shockingly, internal dangers account for greater risk than outside hackers breaching the network.
Many of the headline-making breaches in recent history are the result of an “inside job.” Take Ashley Madison, for instance. Andrew McAfee recently reported evidence that their infamous breach was the direct result of a “lone female” inside the parent company1, Avid Life Media. And they are not alone. Within the last year, both DuPont2 and P&G3 have filed suits against former employees for theft of trade secrets. Collectively, these trade secrets are essential to sustaining more than $25 billion in annual sales within the related segments for the respective companies.
In a report on 2014 breach statistics4 released by the Identity Theft Resource Center (ITRC), the research notes that of the 760 breaches reported in the year, 37% were the result of insider threats (defined as “Insider Theft,” “Employee Negligence,” and “Subcontractors”). A lesser 30% of breaches were the result of outside hacking.
Insider threats can be broken down into three main areas:
Malice. Like the cases of P&G and DuPont, this is when an insider knowingly misappropriates sensitive corporate information.
Negligence. This is when a breach is the result of a mistake by an employee. For example, an employee accidentally sends sensitive information to an unauthorized party, an assistant maintains a scratch pad of executive passwords, or an employee clicks on nefarious links on the internet.
Ruse. Better known as social engineering or phishing, this is when employees are victims of intentional deception. Social engineering is a hacking technique that preys upon users’ sensibilities in order to obtain credentials that give the attacker access to a network. For example, an employee receives an e-mail from what looks like corporate IT asking them to verify network credentials.
Preventing threats from within requires initiative across the spectrum of people, process and technology. Many firms rely heavily on policy as a primary measure of defense for insider threats. Policies are necessary, but they do not constitute adequate threat protection. Baseline measures to protect the enterprise include robust and persistent employee awareness programs, documented policies, virus and malware detection, and spam filters. However, these actions are merely proper hygiene. It is unfortunate to note that 34% of enterprises report that they have experienced an insider breach5 despite having good hygiene in place.
Insider threats are difficult to detect because doing so requires the ability to differentiate user behaviors. This challenge of detecting good and evil in this realm is quickly becoming the bastion of artificial intelligence (AI). AI is emerging as the technology with sufficient dynamics to counteract this equally dynamic threat.
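To illustrate what “differentiating user behaviors” means in practice (not tied to any particular vendor’s product), behavior-based detection often starts with simple statistical baselining: learn each user’s normal activity level, then flag large deviations from that baseline. The feature used here (daily file-download counts) and the z-score threshold are hypothetical.

```python
from statistics import mean, stdev

def flag_anomalies(history, today, z_threshold=3.0):
    """Flag users whose activity today deviates sharply from their own baseline.

    history: dict of user -> list of daily event counts (e.g., file downloads)
    today:   dict of user -> today's count
    """
    flagged = []
    for user, counts in history.items():
        if len(counts) < 2:
            continue  # not enough data to baseline this user
        mu, sigma = mean(counts), stdev(counts)
        if sigma == 0:
            sigma = 1.0  # avoid divide-by-zero for a perfectly stable user
        z = (today.get(user, 0) - mu) / sigma
        if z > z_threshold:
            flagged.append((user, round(z, 1)))
    return flagged

history = {"alice": [10, 12, 11, 9, 13], "bob": [5, 6, 5, 7, 5]}
# bob's sudden spike stands out against his own history; alice's day looks normal
print(flag_anomalies(history, {"alice": 12, "bob": 80}))
```

Real products layer far more sophisticated models (peer-group comparison, sequence analysis, machine learning) on top of this idea, but the core principle is the same: the baseline is per-user, not global.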
Several new entrants with AI footing have entered the security space in the categories of data loss prevention and end-point protection. These firms are using patterns, analytics, and AI to identify and react to potential insider threats. A few interesting firms emerging in this space include:
Cylance (www.cylance.com) Cylance applies artificial intelligence, algorithmic science, and machine learning to cybersecurity. Using a predictive analysis process, Cylance identifies what is safe and what is a threat, not just what is in a blacklist or whitelist.
harvest.ai (www.harvest.ai) harvest.ai searches for changes in user behavior, key business systems, and applications caused by cyber-attacks. harvest.ai has applied AI-based algorithms to learn the business value of critical documents across an organization and can detect and stop data breaches from targeted attacks and insider threats before data is stolen.
Bitglass (www.bitglass.com) Bitglass Breach Discovery analyzes outbound flows through firewalls to identify high-risk activities indicating breach or exfiltration, allowing you to remediate issues quickly before any real damage occurs.
Exabeam (www.exabeam.com) Exabeam is a user behavior analytics solution that leverages existing log data to quickly detect advanced attacks and accelerate incident response. Exabeam automates the work of security analysts by resolving individual security events and behavior anomalies into a complete attack chain.
Insider threats can be detrimental to the success of your enterprise. Take action now. Protect your perimeter from the outside-in AND the inside-out. Below are three steps that should be essential to your cyber security protection roadmap.
Exercise proper hygiene. Deploy up-to-date end-point management, user access management, OS patches, virus and malware detection, spam filtering, and critical data governance.
Create security esprit de corps. This is a marketing challenge. Every employee should know the do’s and don’ts, and feel a sense of pride in protecting company information.
Deploy behavior-based detection. Technologies utilizing AI and pattern matching to detect changes in user behaviors will help uncover and prevent threats from within.
The ‘inside job’ can come in many forms. What’s important is that your enterprise security program encompass good hygiene, good marketing, and new technologies to keep your critical data locked safe inside your high and wide perimeter. Keep the hackers OUT and contain the threats from within.
4 Identity Theft Resource Center (ITRC), 2014. The ITRC defines a data breach as an incident in which an individual name plus a Social Security number, driver’s license number, medical record or financial record (credit/debit cards included) is potentially put at risk because of exposure. This exposure can occur either electronically or in paper format. The ITRC will capture breaches that do not, by the nature of the incident, trigger data breach notification laws. Generally, these breaches consist of the exposure of user names, emails and passwords without involving sensitive personal identifying information. These breach incidents will be included by name but without the total number of records exposed.
Are you allowing employee-owned devices on your network? BYOD (Bring Your Own Device) programs are a rising trend as smartphones, tablets and laptops become ever-more powerful. Your employees enjoy the convenience of using their own devices to access their work. For some employees, it makes sense to give them a stipend to use their own device rather than providing them with a device just for work.
Of course you have to be concerned about the security of your BYOD program. You really don’t have any way of knowing how secure your employees’ devices are. Could they be putting your company’s proprietary information at risk?
One way you can make sure your company’s BYOD program is secure is to put someone in charge of monitoring the program. This person enforces the security rules, which should be distributed to all employees as part of an updated IT policy.
Your security officer or team will monitor the use of your employees’ devices. Common security protocols include issuing each employee a separate password to access the company’s servers, installing company-monitored GPS on the devices in case a device is stolen, and implementing an automatic shutoff protocol that will deactivate a device if it is lost or stolen.
Anti-virus and firewall software are mandatory for devices that are used to access company information. Your company can decide which security tools are suitable and purchase them for employees. Giving the security manager or team passwords to employees’ devices also should be mandatory, so the devices can be controlled remotely, if necessary.
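The controls described above can be enforced at enrollment time with a simple policy check. The sketch below is illustrative only: the required controls, the supported-platform list, and the device-record shape are assumptions, and in practice an MDM platform would supply this inventory data.

```python
# Hypothetical BYOD policy: required controls and supported platforms are examples
REQUIRED_CONTROLS = {"antivirus", "firewall", "remote_wipe", "screen_lock"}
SUPPORTED_PLATFORMS = {"ios", "android", "windows"}

def check_device(device):
    """Return a list of reasons a device fails the BYOD policy (empty if compliant)."""
    problems = []
    if device.get("platform") not in SUPPORTED_PLATFORMS:
        problems.append("unsupported device type")
    missing = REQUIRED_CONTROLS - set(device.get("controls", []))
    problems.extend(f"missing control: {c}" for c in sorted(missing))
    return problems

device = {"platform": "android", "controls": ["antivirus", "screen_lock"]}
print(check_device(device))
```

A check like this makes the policy auditable: every enrollment decision traces back to a named rule rather than a reviewer’s judgment.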
Decide what types of devices the company will be able to support with its BYOD program, and don’t allow unauthorized devices to access company servers. If you have employees who use non-supported devices, you’ll have to buy them supported devices at a discount. If employees choose to use these work devices for personal purposes as well, come up with a cost-sharing program for them. Otherwise, their personal and work devices must be kept separate.
BYOD is already a de facto standard in all types of businesses. The question for IT leaders is no longer “if,” nor even “when.” If you’re not already asking “how,” you’re behind the curve, and your organization may begin to suffer.
For most organizations, IT management issues are a strategic and organizational challenge. In addition, CIOs continue to strive for performance gains such as making IT more efficient, nimble, and innovative. Greater agility, better cost control, and better alignment with business objectives further support every organization’s ambition to achieve strategic benefits. Aligning IT with the business has become a critical priority for IT leaders.
According to the Society for Information Management (SIM) and its 2016 IT Trends Survey, aligning IT with the business has become a top-ten issue. Almost half of the respondents identified alignment as a top five IT management concern for 2016.
So we ask ourselves, what does “alignment” mean for the IT professional?
Is it the traditional definition: the fit between the objectives of the business and IT, how well IT knows and supports those objectives, and its ability to satisfy business requirements?
Or is it less about alignment and more about how cohesively IT and the business team or partner together to accomplish enterprise objectives?
Our point of view on aligning IT with the business
At WGroup, we see alignment as a collective effort on the part of both business and IT. Alignment confusion arises when the IT organization has difficulty responding to business and technological changes. Furthermore, we do not see it as technology’s responsibility alone to become, and stay, aligned; rather, business and IT should partner to form cohesive teams that share responsibilities at the strategic level and jointly share responsibilities at the tactical technology-execution level.
What to do
In our experience, we have supported organizations with both organizational change and “running IT with the business”. In both instances, we promoted and encouraged clients to take a top-down strategic approach. This approach includes establishing and institutionalizing five basic portfolio management techniques:
Break down the silos. Be aware of organizational functions and staff titles, but take down the organizational barriers between IT and the business; instead, encourage the establishment of cross-functional teams with accountability for accomplishing collective objectives and project goals
Convene a strategic group of stakeholders. Establish a forum (or forums) with representatives that include internal (business, technology) or external partners to drive ownership, accountability and oversight of technological change
Establish the rules of the road. Be prepared to improve on enterprise collaboration, the collective ways and guiding principles in which teams come together to accomplish firm-wide and initiative objectives
Establish role-based teams. Drive execution activities for project consistency. This includes establishing portfolios of projects, resources and combined teams with clearly demarcated accountable roles and responsibilities
Focus on deliverable milestones. To ensure consistency of what is to be accomplished, and by when, clear milestones must be created, measured, and actioned
What to expect
Taking this approach typically yields the following benefits:
Significantly improved management of business expectations
Improved engagement and cooperation from all stakeholders
A lot has been written about the watermelon effect in outsourcing, the phenomenon that occurs when SLAs look good (green) on the outside, but on the inside they’re actually problematic (red). How can CIOs and other IT executives avoid this problem? What are the strategies that help companies understand and evaluate SLAs so they can reach more agreeable terms with suppliers? In this blog post we’ll explore the root causes of the watermelon effect and discuss strategies to help avoid problems.
Understanding the root cause of watermelon effect
The watermelon effect generally occurs for one reason: poorly defined metrics. Companies often have contract relationships with SLAs that are tracked monthly, with certain goals and penalties if those goals are missed. In many cases, suppliers meet the defined SLA targets but still aren’t able to meet business objectives. Reports might look good, but senior management is still getting negative feedback from customers and users. This can be career-threatening for CIOs. Companies continue to rely on industry-standard metrics to evaluate contracts, but it just isn’t working.
Measuring based on business value
In order to solve the watermelon effect, companies need to change the paradigm for SLA metrics. Rather than only considering traditional specifications like responsiveness to incidents, component availability, and service restoration, consider each element in terms of how it affects the business. This will allow companies to measure based on factors that have a real effect on business goals, allowing them to come to more effective agreements.
• Don’t just use industry standard metrics – Industry standard metrics are important, but they aren’t enough. It is important to develop metrics to suit each individual company’s unique business needs. But be sure newly-defined service levels are realistic and measurable. Identifying more relevant key performance indicators to spot trends is critical to successful supplier agreements and to the company.
• Stay proactive – All companies should regularly evaluate their SLAs to ensure they are helping identify trends, problem areas and accurately reflect business realities. When it’s time for new contract negotiations, this information can then be used to develop stronger SLAs.
• Implement governance structure – A strong governance structure for managing SLAs is important. By building dedicated frameworks for analyzing SLA performance, creating new metrics, and negotiating contracts, companies can help ensure that performance remains strong. This can normally be accomplished using existing contract management or service management staff.
• Examine contracts holistically – By including IT, business leaders, and other departments during contract development and negotiations, company collaboration can help ensure that agreements meet the needs of the whole company.
• Be more aggressive – Many companies are simply too passive when it comes to drafting contracts. All organizations should strive to fully understand what is being agreed upon and to ask for the provisions they need to meet business objectives. Clearly define each metric, the measurement data sources, tools, calculation, frequency and business priority weighting.
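The business-priority weighting described in the points above can be made concrete with a simple scorecard: each metric carries a target, an actual, and a weight reflecting its business impact, and the composite score is what gets reported upward. The metric names, targets, and weights below are illustrative assumptions, not a standard.

```python
def sla_score(metrics):
    """Composite SLA score weighted by business priority.

    metrics: list of (name, target, actual, weight), where higher actual is
    better and the weights sum to 1.0. Attainment is capped at 100% so
    over-delivery on one metric cannot mask a miss on another.
    """
    score = 0.0
    for name, target, actual, weight in metrics:
        attainment = min(actual / target, 1.0)
        score += weight * attainment
    return round(100 * score, 1)

metrics = [
    ("order_system_availability", 0.999, 0.9995, 0.5),   # business-critical, weighted heavily
    ("incident_response_within_sla", 0.95, 0.80, 0.3),   # the miss that a flat report might hide
    ("batch_jobs_on_time", 0.98, 0.99, 0.2),
]
print(sla_score(metrics))
```

The design choice worth noting is the cap on attainment: without it, a supplier could over-deliver on an easy metric to offset a miss on a critical one, which is precisely the watermelon pattern.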
Negotiations with suppliers are a primary part of the IT organization’s role in a company and should be treated as such. It’s time to develop better ways to work with the business and address problems with SLAs. Smash the watermelon effect by defining and measuring SLAs based on their ability to meet business objectives – that is the true value of IT!
In the last post of this series (Part 1), we introduced the idea that IT service contracts have become more difficult to understand and talked about why outsourcing costs are such a critical part of today’s IT strategy. In this post, we’ll take a deeper look at why contracts have become more complicated and what that increased complexity means for your organization.
What’s driving increased contract complexity?
Today, contracts can include from fifty to well over a hundred individual unit price points and are often extremely difficult to understand. This can make deciphering contracts and evaluating their potential impact on performance and the business challenging. But what has caused contracts to become so complex?
Multi-tenant environments make pricing less clear
One of the major shifts driving the change is the popularity of multi-tenant environments. This trend has many technical and operational benefits for IT departments, and can even lead to better pricing terms, but it also often makes IT price comparison and variable management more complicated. As IT services shift from dedicated infrastructure, computing, and storage to shared resources, pricing models have drastically changed.
Generation-one contracts primarily dealt with dedicated infrastructure, which provided straightforward price points. Now, for example, cabinet and top-of-rack costs are apportioned across multiple VMs and images, spreading out the cost and making it harder to understand. Tape operations are rapidly being replaced by virtual tape libraries (VTLs), with infrastructure costs often shared across organizations or between departments.
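To illustrate the apportionment just described (with entirely hypothetical numbers), a shared cabinet cost might be split across the VM groups it hosts in proportion to an allocation metric such as vCPU count:

```python
def apportion(total_cost, allocations):
    """Split a shared infrastructure cost across tenants in proportion
    to an allocation metric (vCPUs, GB of storage, etc.)."""
    total_units = sum(allocations.values())
    return {tenant: round(total_cost * units / total_units, 2)
            for tenant, units in allocations.items()}

# Hypothetical: a $9,000/month cabinet plus top-of-rack cost, split by vCPUs per tenant VM group
print(apportion(9000, {"finance_vms": 24, "hr_vms": 8, "marketing_vms": 16}))
```

The arithmetic is trivial; the difficulty for the client is that the chosen allocation metric, and the shared pool it divides, are usually invisible inside the supplier’s unit price.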
In first generation IT service contracts, clients could much more easily understand the effects that changes in a technical environment would have on costs. With resources shared between many organizations, this becomes much more difficult to predict. It also makes finding the optimal price to performance ratio for your organization challenging. Pricing models have been adapted to reflect changing service models, and companies must stay proactive to understand how this change will affect them.
Today, to reflect the implications of multi-tenant environments and the commoditization of IT services, pricing is affected by a range of variables, including:
Fixed Cost Elements
Variable Cost Elements
Tiers of Service per Element
Degree of Standardization
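Put together, a current-generation unit price is a function of all four variables above. A minimal sketch, with hypothetical rates, tiers, and discounts: a fixed component, a variable component driven by consumption, a service-tier multiplier, and a discount for standardized configurations.

```python
# All rates below are illustrative assumptions, not real market pricing
TIER_MULTIPLIER = {"bronze": 1.0, "silver": 1.25, "gold": 1.6}   # tiers of service per element
STANDARDIZATION_DISCOUNT = {"standard": 0.90, "custom": 1.0}     # standard builds cost less to run

def unit_price(fixed, variable_rate, units, tier, config):
    """Monthly price for one service element: fixed + variable cost,
    adjusted for service tier and degree of standardization."""
    base = fixed + variable_rate * units
    return round(base * TIER_MULTIPLIER[tier] * STANDARDIZATION_DISCOUNT[config], 2)

# e.g., a managed VM: $40 fixed, $2.50 per GB of storage, 100 GB, gold tier, standard build
print(unit_price(40, 2.50, 100, "gold", "standard"))
```

Even this toy model shows why benchmarking is hard: two suppliers quoting the same headline number may be combining very different fixed/variable splits, tier definitions, and standardization assumptions.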
Service delivery partners aggregate prices
Price aggregation might seem like it would make IT outsourcing agreements simpler, but it actually has the opposite effect. Because pricing has become more complex thanks to multi-tenant environments and other variables, aggregation simply serves to confuse the issue even further. Hiding several variables behind a single price point makes benchmarking and market price analysis much more difficult. Clients simply have to work harder to find out what they are actually getting. Motives for doing this are also usually self-serving for service partners, as price aggregation reduces areas of contention with a client when prices increase.
In today’s IT services market, companies must have the right knowledge and guidance to understand how their outsourcing agreements will impact their business. In the final post of this series we will further explore IT outsourcing agreements and pricing by taking a closer look at how companies can manage the increased complexity of IT service contracts and develop strategies to ensure that they reach an agreement that meets their business needs.
WGroup’s consultants have decades of experience navigating the most complex outsourcing agreements. To learn more about WGroup’s sourcing advisory services and how we can help you turn IT outsourcing into a strategic business enabler, visit our Services Page.
This is the first of a multi-part series on how the increasing complexity of IT outsourcing engagements is creating the need for IT leaders and sourcing professionals to understand and work with a new generation of pricing models. Visit The Journal to stay up to date with strategic sourcing and IT leadership topics.
As IT services evolve and the wealth of service offerings increases, IT outsourcing agreements are becoming increasingly complex and difficult to manage for many companies. This can often lead to organizations paying too much for their IT services or not getting the service they need. Having an effective pricing strategy to manage outsourcing agreements is a critical component of IT success. In this post, we’ll discuss the trend of increasingly complex agreements and what it means for your business.
IT outsourcing agreements are growing in complexity
Today’s outsourcing agreements are very different from those of the previous generation. As dedicated infrastructures make way for complicated multi-tenant environments, many organizations are struggling to effectively manage their IT contracts and pricing strategies. It is not uncommon for a current generation contract to include 50 to 100 — or more — individual price points. New pricing structures can take into account various tiers of service for each scope element, aggregate price points, organization financial structure, and the degree of technology standardization. Relating this complex array of factors to their potential impact on financial performance and forming an effective cost/benefit analysis can be challenging.
This trend is further driven by the increased use of multi-sourcing strategies to leverage best-in-class providers for different elements. Organizations must now manage an increasing number of contracts, with varying pricing structures that can conflict with overall IT planning targets and business requirements. This adds additional layers of technology complexity and new requirements for outsourcing pricing strategies and requires a greater management focus.
IT outsourcing pricing constructs are an integral part of IT strategy
Although price should not be the primary driver of IT strategy, it is a fundamental component of it. The IT department strives to deliver the most effective applications and services to the end user at the best possible cost. Understanding outsourcing agreements and pricing best practices is a critical part of overall IT strategy. Building key metrics into SLAs, regularly evaluating performance, and having a clear comprehension of every service being delivered are all key factors in the process. This allows business to better understand the dynamics of price and how pricing can impact financial and operational performance. It also allows executives to align IT objectives with business goals and achieve a more harmonious balance between transparency and complexity. Organizations must strive to gain perspective on the changing pricing structures and use that understanding to deliver services and value to the end user more cost effectively.
In the next post of this series we will further explore IT outsourcing agreements and pricing strategy by taking a closer look at the root causes driving increased complexity and how they can affect your organization. Visit The Journal to learn more.
With the TBM conference coming up in the last week of October, CIOs have more reason than ever to consider how new technology can help them better manage their departments and align their goals with the business goals of their organizations. Recent statistics released by CIO Magazine show that an incredible 54% of business executives viewed the IT department as “an obstacle to their mission” and 47% said the CIO was fighting a “turf war” with at least one other C-level executive. Clearly many IT leaders are struggling to coordinate with the business at large, leading to disconnected priorities, inefficiency, and missed opportunities. We think it’s time to not only repair this disconnect, but leapfrog over it into a rethinking of IT. The TBM conference is one way to start cultivating this kind of thinking. (And contacting WGroup for a scan of your opportunities is another.)
What is TBM?
TBM, or technology business management, is a relatively new category of SaaS solutions designed for CIOs and IT departments to help them manage their businesses. These tools allow IT Leaders to run their businesses with the accuracy, transparency, and efficiency that modern solutions have afforded their peers, allowing them to more accurately justify costs, optimize efficiency, and align with business goals.
Why should TBM be on my priority list?
1. It is a critical enabler to running IT “as a business.”
IT has unique needs and goals that are not met by generic business management tools. TBM offerings seek to fill this gap by providing purpose-built solutions designed specifically to help IT Leaders run their business. This allows them to move from traditional, often ineffective options, such as spreadsheets. These tools recognize the core truth that IT, like other departments, needs to manage and report costs, performance, and future projects. In order to do this effectively, IT Leaders need solutions that take into account the unique metrics, practices, and challenges of IT. TBM solutions allow IT Leaders to oversee vendors, manage benchmarks, and track project costs to ensure that their department is run efficiently.
2. It is essential in justifying IT spending to the business.
Spending is often one of the greatest areas of contention the organization at-large will have with the IT leader. And because many of the benefits of investment in IT can seem intangible to the average user, many business leaders are left wondering exactly where all that money is going. TBM solutions give IT Leaders the power to more easily show the organization exactly how resources are being spent, to prove that they are having measurable impacts on performance. This is essential in providing the IT leader with the tools they need to maintain their current budget, or even ask for more funding.
3. It facilitates the development of stronger partnerships with the business.
Today, IT is no longer a department with a merely supporting role; it is often a major source of revenue and competitive advantage. IT enables the primary channels for interfacing with customers, and holds responsibility for boosting productivity within the organization. As the IT department becomes an ever more integral part of daily business functions, it is critical that IT leaders increase their efforts to coordinate their own goals with the goals of the business at large. TBM solutions allow for greater transparency in the IT organization, allowing IT leaders to more effectively align their priorities with those of their business partners and giving them the tools they need to work together to achieve broad business objectives.
If you’d like to learn more about the benefits of TBM, be sure to check out the 2015 TBM Conference October 26 to 29 at the Hyatt Regency in Chicago.
To learn more about WGroup’s advisory services for strategic IT transformation, visit our services page.
Providers of outsourced services love to talk about how much money they will save you on salaries and benefits, and they should, considering how significant those expenses can be for most organizations. But unless you understand and quantify—and then negotiate for—the improved business outcomes that come with outsourcing, you could easily find that your bottom line was better served by permanent employees providing those services.
In the case of IT services, salaries are easy to quantify, the business value of those services less so. So you need to approach the issue of outsourcing IT services very carefully, making sure that your corporate priorities are served, and understanding how the value of IT services can change with outsourcing.
Most importantly, you should view outsourcing as an opportunity to increase the quality of IT services across your business. This means choosing a program lead who has a deep understanding and appreciation of IT services’ benefits and potential. That doesn’t mean, however, that the most technically proficient IT manager would be your best choice. You need someone who can also manage customer relationships, and who understands the outcomes that your business wants. The lead must be an outstanding communicator with potential IT services providers. Choose someone who can enable the provider to understand how the IT services align to business outcomes, and who will manage the provider as a strategic partner – not a transactional supplier.
Where should you focus? To begin, consider which services you’re currently overpaying and underpaying for. Many businesses are overpaying for in-house IT administration and support functions. Think also in terms of buying IT services on an as-needed basis to lower costs, gain a higher level of expertise, and get 24/7/365 coverage. Paying for performance also has multiple advantages over paying for specific services.
On a more subtle level, consider the costs that many outsourced providers bring along with them. Will they be constantly upselling (does the engagement team carry a sales quota or a customer satisfaction objective)? What are your costs for travel and training on your systems? Employee relationships and corporate morale will benefit if you retain as much of your IT staff as possible and then train them to provide your most valuable IT services. Consider the value of continuity to your IT services and staff them accordingly.
Regardless of the service provider’s security capabilities, their employees simply do not have the inherent incentives to keep your data secure that your employees have. So craft an SLA that minimizes the opportunities for the vendor to compromise your data, even inadvertently.
Two final points: First, be sure that your providers understand what they are doing right and that you appreciate it. Anyone in a service-providing role is naturally tuned to providing more of what their clients value most. Second, be tough. Insist on the business outcomes that drove you to outsource in the first place.
When going to market for services and engaging new service providers, there is a tendency to focus on the deal itself, and oftentimes supplier transition is not considered as important as price, performance or added value. Picking a supplier that has a proven transition performance record, as well as assigning an experienced IT transition project management team, is critical to service transition and service delivery success.
Here are the Top 5 Service Transition Risks and Mitigation Strategies:
1. Risk of: Schedule Delays
Schedule delays drive up transition costs and delay expected benefit realization.
Mitigation Strategy: Develop a transition strategy that outlines a stepwise schedule approach, testing transition readiness and increasing scope and complexity as specific milestone criteria are met.
2. Risk of: Service Costs
Service costs increase after contract signing due to “hidden service demand.”
Mitigation Strategy: Employ a rigorous due diligence approach to understand current demand drivers and service management requirements. This approach aligns in-scope baseline costs against time-and-materials out-of-scope costs, accurately forecasts future service costs, and ensures that accurate baselines are established with the service provider during contract negotiations.
3. Risk of: High Demand Skill Sets
High demand skill sets that are difficult to source in the market can impact service providers’ ability to staff.
Mitigation Strategy: Evaluate the relative complexity of the current and future business processes to identify the specialized resources needed to support day-to-day operations. Make specific HR adjustments to mitigate the impact of critical resources leaving the organization (if applicable).
4. Risk of: Quality of Service (QOS) Degradation
Quality of Service (QOS) degradation after contract execution
Mitigation Strategy: Establish service level management best practices that incent the most important service objectives for the business. Activate governance and establish governance procedures in the first month of transition. This approach minimizes QOS degradation and provides contract mechanisms for financial relief.
5. Risk of: Managing Service Provider Effectiveness
IT and the vendor management office (VMO) struggle to manage service provider effectiveness and are unable to remediate service provider performance
Mitigation Strategy: Build an effective contract governance model that will ensure that:
IT retains adequate control of their IT services
Responsibility and commitment are accepted values between the service provider and client
Performance and improvements are continually measured in quantity and quality
Both partners strive to enhance efficiency and excellence
Partners cultivate a stable and trustful relationship
Each of these risks should be addressed as part of the service requirements when sourcing services and should be factored into the supplier selection process. Mitigating transition delays is essential both to integrating a new supplier and to minimizing the service interruptions that a poorly planned transition can cause.
As IT executives, we are all too familiar with the constant cost pressure and the tactical measures required to manage IT operational budgets. The focus tends to be on lights-on operational expenses, and areas like resources, procurement, travel and project investments become the targets for reductions. If this is your only focus, you miss a great opportunity to mine for real savings and improve IT performance.
These target reductions tend to be short-term focused and are intended to affect end-of-quarter cost objectives. An area that can have a significant impact on IT operational cost is your services contract portfolio. Although longer-term focused, conducting an assessment and evaluating the structure, terms and performance of your contract portfolio could uncover substantial cost reduction opportunities.
There is constant “churn” with suppliers, agreements and scope of services. The dynamic nature of managing the stream of external service demand can lead to inefficiencies that equate to increased IT costs.
The challenge in managing a dynamic, evolving supplier contract portfolio is compounded by trying to maintain consistent process discipline when there are several contributors participating in the sourcing process.
With this challenge comes an opportunity: to analyze the contract portfolio and mine for cost reductions. These opportunities can come from several areas:
1. Consolidation – leverage price reductions by increasing volume to suppliers who provide the most value.
2. Re-bid – avoid the path of least resistance to renew contracts when they come to term. Reestablish requirements and go to market to improve service value.
3. Contract audits – review active contracts for need, terms, scope, performance and consumption to identify unnecessary spend.
4. Benchmarking – compare current pricing against market pricing to identify price adjustment opportunities.
5. Contract and service structure – there are several areas that could improve cost control when developing your next service contract: consumption-based billing, factoring in demand drivers, no penalty termination, shorter-term agreements, business objective alignment based on outcomes, service level agreements with service termination and service provider flexibility to name a few.
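The benchmarking lever above can be sketched as a simple comparison of contracted unit rates against market rates. The services, rates, and 10% tolerance below are illustrative assumptions, not real benchmark data:

```python
# Illustrative sketch: flag contracts whose unit rates exceed a market
# benchmark by more than a tolerance. All figures are hypothetical;
# real benchmarks come from market data providers.
contracts = {
    "service_desk_per_ticket": {"contract_rate": 18.50, "market_rate": 14.00},
    "server_managed_per_month": {"contract_rate": 210.0, "market_rate": 205.0},
    "storage_per_tb_month": {"contract_rate": 95.0, "market_rate": 70.0},
}

TOLERANCE = 0.10  # flag anything more than 10% above market

def benchmark_gaps(contracts, tolerance=TOLERANCE):
    """Return contracts priced above market, with the premium as a fraction."""
    gaps = {}
    for name, rates in contracts.items():
        premium = (rates["contract_rate"] - rates["market_rate"]) / rates["market_rate"]
        if premium > tolerance:
            gaps[name] = round(premium, 2)
    return gaps

print(benchmark_gaps(contracts))
# → {'service_desk_per_ticket': 0.32, 'storage_per_tb_month': 0.36}
```

The output is the shortlist of contracts worth targeting for price adjustment or re-bid; services within tolerance of market are left alone.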
Consider taking two steps of action:
1. Conduct an assessment of the contract portfolio to identify “low hanging fruit” opportunities. Build a plan, drive to completion and harvest savings.
2. Commit to establishing or improving your vendor management function, building organizational capabilities and establishing a set of policies, standards and processes that will provide a process framework for managing the lifecycle of supplier contracts in a consistent manner.
Following these actions will not only increase IT performance, but deliver cost reductions. A well-established vendor management organization (VMO) practice can be self-funding and provide significant value to the organization. Active contract portfolio management combined with establishing standard processes will support VMO funding by way of cost savings and cost avoidance.
End user services are constantly evolving to address important items such as mobility, BYOD, virtualization and service component standardization. Aligning end user services to meet the evolving requirements generated by social, mobile, analytics, etc. requires a focus on adapting to rapidly changing client requirements. Many companies choose to outsource end user services to drive lower servicing costs. In our view, this should not be the only goal. Transforming end user services as the centerpiece of IT service management (ITSM) operations is key to realizing the most important benefit: improved IT service quality across the business.
End user services are an area where IT can truly deliver phenomenal value to its customers. To make this transformation with your service provider, it’s important to focus on a few key strategies (many of which apply even if you don’t outsource these services):
Focus on “left shift” incident and service request management: enable self-service; develop and drive use of knowledge bases (FAQs, etc.); deliver software automatically; and automate as many repeatable tasks/requests as possible
Improve the information security and data loss prevention capabilities and services across all end user devices
Drive “zero-touch” desk side support through diagnostic trees and remote control
Staff a “Techie Bar” in a popular place for ad hoc mobile device support.
Use six-sigma and total quality techniques such as Pareto and root cause analysis as a means to drive down service ticket volumes and costs
Use industry accepted and easily integrated service desk tools such as ServiceNow or Remedy to manage incident, request, change, configuration and asset management processes
Minimize the number of ‘Gold’ images and automate the software distribution, release management, and re-imaging processes
Choose the location for off-site / off-shore service desk personnel for language neutrality and full 24×7 coverage (follow the sun); In recent years, Service Desk locations have shifted to the Philippines and Singapore, due to their better English-speaking capabilities, and increasingly, other languages are supported in eastern European countries such as the Czech Republic, Hungary, Poland, etc.
Improve the quality of staff assigned to your account. Demand a say in the hiring and evaluation of service desk resources to ensure/certify technical and language proficiency
Contract for market-current resource unit baselines and business-focused SLAs based on service provider motivation to optimize both service quality and price over time
Define and manage SLAs to drive behavior
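The Pareto analysis mentioned in the strategies above can be sketched in a few lines: rank incident categories by volume and find the smallest set that accounts for roughly 80% of tickets. The categories and counts are illustrative placeholders:

```python
# Illustrative Pareto sketch: find the incident categories that drive
# ~80% of service desk ticket volume. Categories and counts are hypothetical.
tickets = {
    "password_reset": 4200,
    "software_install": 2600,
    "vpn_connectivity": 1400,
    "printer": 900,
    "hardware_break_fix": 500,
    "other": 400,
}

def pareto_drivers(tickets, threshold=0.80):
    """Return the highest-volume categories that together reach the threshold."""
    total = sum(tickets.values())
    drivers, cumulative = [], 0
    for category, count in sorted(tickets.items(), key=lambda kv: kv[1], reverse=True):
        drivers.append(category)
        cumulative += count
        if cumulative / total >= threshold:
            break
    return drivers

print(pareto_drivers(tickets))
# → ['password_reset', 'software_install', 'vpn_connectivity']
```

In this hypothetical data, three categories drive 82% of volume, so self-service and automation effort aimed at those three yields the largest ticket reduction per dollar.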
These strategies, along with other best practices, can help drive toward key outcomes such as improved customer satisfaction, a better cost position for service desk operations, and a stronger business perception of IT service quality. Deciding which of these strategies works best for you is the biggest challenge; WGroup has years of experience in helping to determine what’s best.
In today’s dynamic technology environment, sourcing is increasingly utilized as a key component of global IT service delivery. As sourcing is a highly specialized and complex process, many IT and business executives consider hiring a specialized sourcing advisory firm to support the development of a sourcing strategy, the execution of an insourcing initiative, or the management of an outsourcing transaction.
The role that a sourcing advisor plays in the strategy development and orchestration over the sourcing process is vital to achieving the desired business outcomes. Conducting a thorough analysis of sourcing advisory firms is key to finding the best fit firm for your project.
Developing an RFP is one way to evaluate the capabilities of sourcing advisory firms to support your project. There are two key components of a Sourcing Advisory RFP – information about your company, and information that you should request of all advisory firm bidders. To break it down, we’ve highlighted some specific recommendations across both dimensions.
What to share about your company:
What your company is trying to accomplish and why you are doing this project. Explain your chief outcomes from sourcing.
Any recent transformation, consolidation, and change efforts impacting your operations.
Any forecasted transformation, consolidation, and change efforts in the 1-2 year horizon.
At a high level, operations at your company and which business units your IT organization supports. Are global operations covered? Are foreign entities involved?
Major software packages, systems, and applications you utilize.
The size/magnitude of your IT operations (budgets, locations, employees, and any more granular information if readily available)
The scope of IT services that you want evaluated
What your infrastructure looks like (Mainframe, UNIX, Linux, etc…)
Your network (provider, insourced/outsourced, domestic/global scope)
Any key business cycles that should be considered (spikes in demand, etc.)
What work if any you’ve already done in thinking about a sourcing strategy (core vs. non-core analysis, financial analysis, risk, culture, etc…)
Your budget constraints or ballpark cost if you have one in mind.
Any timeline constraints (contract expirations, renewals, key milestones, etc…)
A desired timeline on the Advisor Selection Process (when questions are due, when responses are due, when on-site presentations will take place if requested, when a decision will be made, etc…)
What to ask of all bidders responding to your RFP:
Their methodology including key steps and deliverable types
Their philosophy around sourcing and advisory services (cost focused, transformation focused, outcomes focused, template driven, custom, etc…)
The focus of their firm (procurement staff-augmentation, management consulting, IT strategy, etc…)
How their methodology enables comparison of costs to market, and addresses evaluating the performance, risk, cultural and process dimensions for sourcing.
Their knowledge of the service provider space
Their staffing model – how many consultants would be on the project? How is the engagement team structured? Will they be onsite or remote?
Resumes or biographies of the proposed on-the-ground resources. Be explicit that you want to see the actual engagement team, not a sales or partner-level oversight team
Information about the team’s experience working on sourcing advisory engagements
A detailed project timeline
A not-to-exceed cost, or a fixed price bid, with documented assumptions.
A description of time requirements from your staff to support the process (both from which roles/levels, and how much time throughout the key phases)
Disclosure of any financial relationships or partnerships with outsourced / managed service providers (including research sales, event/website sponsorships, workshops, client relationships in other business units, and provider-focused consulting)
At least 2 executive-level references and case studies
ROI is a term from the investment world, the operational mantra of which is risk management. Maximizing the likelihood of a positive return on any investment requires discipline throughout all stages of the investment cycle. Investments in Big Data are no exception.1 Take the starting point, for instance. An astute investor would not invest in all the ‘good’ opportunities in front of him at once, even if they all pass the initial sniff test. You must be selective, to align with your investment style and risk tolerance. As when dabbling in any new arena, you would likely start out small and incrementally up your bet if the first forays prove fruitful. During execution of the project, you may well acquire new staff (skills), establish new controls (processes), and buy new tools (technology) to monitor it, which calls for rigorous requirements definition. As the project matures and grows, the number of stakeholders, curious onlookers and potential ‘second-wave’ investors increases, so a governance structure is needed to manage demand and trial usage of the project’s outputs; that should be part of the plan from the get-go, not an afterthought. Finally, since all the skills,2 infrastructure and technology most appropriate for capturing the investment’s benefits are not likely to be resident in-house long-term, a sourcing strategy should be in place alongside the investment plan as well.
In short, a Big Data investment cycle should cover the following steps:
Prioritize Big Data Analytics (BDA) opportunities
Identify specific target use cases
Quantify benefits and risks
Identify executive sponsors
Assess readiness of the people to be involved
Define roles
Define the end-to-end processes for the Big Data use case
How multiple departments collaborate
Data storage/retention practices
Data security considerations
Controls (e.g. HIPAA, PCI)
Clarify the needed technology
Establish governance early on
Select sourcing channels and partners
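The prioritization and benefit/risk quantification steps above can be sketched as a simple weighted scoring model. The use cases, 1–5 scores, and weights below are illustrative assumptions, not a prescribed methodology:

```python
# Illustrative sketch: rank candidate Big Data Analytics use cases by a
# weighted score of benefit, readiness, and risk. All names, scores (1-5),
# and weights are hypothetical placeholders.
use_cases = [
    {"name": "churn_prediction", "benefit": 5, "readiness": 3, "risk": 2},
    {"name": "supply_chain_optimization", "benefit": 4, "readiness": 4, "risk": 3},
    {"name": "sentiment_monitoring", "benefit": 3, "readiness": 5, "risk": 1},
]

WEIGHTS = {"benefit": 0.5, "readiness": 0.3, "risk": -0.2}  # risk subtracts

def prioritize(use_cases, weights=WEIGHTS):
    """Return use cases sorted best-first by the weighted score."""
    def score(uc):
        return sum(weights[k] * uc[k] for k in weights)
    return sorted(use_cases, key=score, reverse=True)

ranked = prioritize(use_cases)
print([uc["name"] for uc in ranked])
# → ['churn_prediction', 'sentiment_monitoring', 'supply_chain_optimization']
```

Such a model keeps the "start small" discipline honest: the top-ranked case gets the first, incremental investment, and the scores are revisited as the readiness assessment and governance steps feed back real information.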
With the rapidly advancing ubiquity of Big Data use cases, most organizations will soon feel compelled to develop some level of BDA capability. If BDA is to become the next must-have competence for most IT groups, as it is poised to be, a strategy to build up that competence is required. After all, there is no other way to learn a complex skill than trying, failing (partially), learning, and trying again.
1 According to an Infochimps Inc. survey, 44% of Big Data initiatives were scrapped, apparently due to poor planning.
2 The McKinsey Global Institute estimated that demand for data analysts will outstrip supply by 50-60% by 2018.
It is essential that innovation have its own unique focus within the contract and vendor management system. While innovation itself should not be the primary focus for the Vendor Manager, the way that Service Level Management is, it should be an important priority.
It is always best for the Vendor Manager to develop a strategy as to what you want to accomplish through innovation, so you are better equipped to recognize the prospective innovation when you see it. Here are some examples of what others have set out to accomplish:
Requiring innovation as part of the managed services contract seems innovative for us when compared to the way we normally operate and I want to capitalize on it.
I want to use innovation to ensure technology currency including better readiness for the unanticipated technology changes.
We’ve recognized that we don’t have time to be creative anymore and I want to use innovation to stimulate creativity within the team.
My Managed Service Provider wants to grow with us so I want to use the innovation initiative to identify opportunities for joint investment in research, education, etc. that can fuel growth.
Managed Service Providers (MSPs) are not always motivated to innovate, since the typical innovative idea tends to make quantum-leap changes to current processes and support dynamics; otherwise it would not be considered innovation in the first place. Those quantum-leap modifications, by their very nature, will often cannibalize at least some component of work for which the MSP is currently responsible. The most utilized approach for Vendor Managers to address innovation is to plan jointly with your MSP(s) to incent innovative ideas that do in fact get accepted and implemented. The most successfully implemented management systems negotiate a form of “gain share” as both the incentive and the reward for the MSP once the innovative concept or idea is implemented and its actual impact is measured and reported.
When thinking about leveraging innovation, one good exercise is to sit down and map out an innovation qualification process that could be used with your MSP’s ideas or nominations. Be sure to weed out those nominations that are merely continual improvements on existing services. You’ll be surprised at how well this exercise can illustrate your philosophy regarding innovation.
Real innovation is hard to come by, yet when it is identified and implemented within the context of a Managed IT Services Contract it can yield unprecedented value to the business from IT Services initiatives.
It is critically important to develop and maintain a balanced management system that pays strict attention to the IT Services Management fundamentals, while at the same time fostering the creative thinking that can bring innovations with significant and lasting impacts to the forefront. By utilizing various techniques and management system prioritization, the Vendor Manager can play a key role in achieving innovation.
Probably the most disruptive trend affecting IT is that responsibility for IT is moving to the business. The primary drivers of this are the prevalence of technology in business services and solutions, as well as business managers’ growing comfort level with technology. WGroup sees this as an important inflection point for CIOs, requiring attention. If the trend is not dealt with effectively, there is a potential for a loss of control of critical IT functions. In turn, there could be a major impact on the enterprise from a risk and performance perspective.
In response to this trend WGroup recommends significant changes to the IT management model and operational processes. These changes are mainly focused on governance, service management, innovation and sourcing.
While all this change is happening, it is important, however, that certain aspects of the traditional IT responsibility base not change. The following core responsibilities should remain centralized and under the purview of IT and the CIO:
Security – Development and enforcement of information security policies, principles and frameworks. Maintenance of system security privileges and the associated tool set, periodic audit of access privileges.
Architectural Standards – Development, maintenance and communication of acceptable operating environments and technology standards. Oversight of the exception process.
Infrastructure Services Management – Including management of the wide area network and telecommunications, corporate data centers whether insourced (management) or outsourced (oversight) and end-user computing services.
Data Management – Coordination of data management processes, oversight of data lifecycle activities, arbitration of business-related data issues, and management of the data tool set.
Centralization of these functions within a highly skilled organization serves to protect the enterprise’s assets from undue risk while leveraging volume and scale to provide the most efficient computing environment. It is the CIO’s responsibility to make senior management aware of the trend and promote the required changes to the management model. By being proactive, the CIO demonstrates an appreciation for the pace and need for change while maintaining a complementary appreciation for the well-being of the business.
IT and business services are increasingly being fulfilled across a network of suppliers. As companies become more experienced with outsourcing they start to seek best-in-class suppliers to deliver services, whether within the same function (e.g., IT infrastructure), across multiple functions (e.g., Finance, HR, IT) or across multiple geographic areas. The upside of this is that you gain much broader knowledge and expertise from across the industry, while keeping all the suppliers on their toes. The downside? They don’t always play nice together like a perfect team.
To get the most out of a multi-supplier environment requires:
Coordination and transparency of decision-making across all suppliers
Making multi-supplier governance a top priority of the retained IT staff’s competency and management focus
Clearly and consistently defined supplier roles and responsibilities within a service delivery process, including the use of formal cross-supplier procedures (CSPs) and Operational Level Agreements (OLAs)
Mechanisms to track and ensure that all services and underlying supplier activities contribute to the Business Value of IT investments
An organizational culture and atmosphere conducive to supplier collaboration in service delivery, while providing defined areas for competition. They should feel secure enough to share innovation ideas while their IP is protected from competitors
Ensuring accountability for decisions and outcomes across all suppliers during service delivery. Granular, service component-level metrics plus end-to-end service level commitments with shared penalties and rewards are the key
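The shared-penalty idea in the last requirement above can be sketched as a simple apportionment: when an end-to-end service level is missed, each supplier in the chain bears a share of the at-risk amount in proportion to the downtime it contributed. The suppliers, minutes, and penalty pool are hypothetical:

```python
# Illustrative sketch: apportion an end-to-end SLA penalty across the
# suppliers in a delivery chain, proportional to the downtime each
# contributed. Supplier names, minutes, and the pool are hypothetical.
downtime_minutes = {"network_supplier": 30, "hosting_supplier": 90, "app_supplier": 60}
PENALTY_POOL = 10_000  # at-risk amount for the missed end-to-end SLA

def apportion_penalty(downtime, pool=PENALTY_POOL):
    """Split the penalty pool pro rata by contributed downtime."""
    total = sum(downtime.values())
    return {supplier: round(pool * minutes / total) for supplier, minutes in downtime.items()}

print(apportion_penalty(downtime_minutes))
# → {'network_supplier': 1667, 'hosting_supplier': 5000, 'app_supplier': 3333}
```

The same pro-rata logic can distribute an earn-back or reward pool when the end-to-end target is exceeded, which is what keeps every supplier accountable for the whole chain, not just its own component.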
The best way to meet these requirements is by establishing a Multi-supplier Governance Model that provides the organization with a framework and discipline. By establishing this model, you can expect to enhance overall service quality rather quickly:
Service suppliers will start viewing themselves as members of an interrelated eco-system. They will start working with each other on a first name basis without client intervention
Service customers’ experience improves drastically, as end-to-end accountability and performance management are promoted. Services will be optimized globally, not just locally
Suppliers will bring added value to the entire IT service delivery chain via collaboration and joint innovation
Overhead on IT governance will be lowered via institutionalizing and streamlining the interaction among suppliers – organized meeting cadences, planned participants, agenda, focused point-of-contact etc.
Business risk is reduced by end-to-end services that actually meet business needs (rather than just meeting IT contractual commitments). There will be faster incident and problem response and resolution times
Near real-time reporting and dashboards reflecting true cross-supplier performance become possible
CMOs are less interested in historical information and information generated through business operations than they are in real-time, external information about customers, their perceptions of the company, its products or services, and the competition. Gartner and many other research firms predict that the CMO will spend more on technology than the CIO in as little as three years. With that in mind, CIOs need to be able to offer value to the CMO.
From a CMO’s standpoint, profitability is of course a good success measure, but daily sales or on-time delivery performance isn’t as compelling as real-time, market-based, decision-driving information. Even traditional media / market share data (IRI, Majors, etc.) combined with wholesale shipments or point-of-sale data is less compelling than in years past, given the wealth of direct information that customers and users of the product or service post on a minute-by-minute basis in social media. Statistics on visits to the website, and on how customers and prospects are navigating it, are not nearly as compelling as what customers are saying, what’s trending, and how that is relevant to what the company offers, how they market it, and how that translates into brand image and loyalty.
CMO’s today have to work both sides of the company image issue:
They need to play defense against social media sneak attacks, like the Domino’s Pizza YouTube post.
They need to be conscious of all the places where negative real-time feeds can go viral and drive image and consumer perception which can change buying patterns overnight.
Big data searches of Facebook, Twitter and other social media provide direct links to these attacks, the people who started them and those who are sharing them.
On offense or selling:
The social media environment provides an immensely rich opportunity to see company perceptions and issues at a very micro level. Where complaints and comments used to come in via mail or to a customer-care center, today it’s far more likely that an individual with a notable experience will post it online. The CMO is interested not only in turning these experiences around to affect company/brand image, but in identifying and responding to these issues proactively.
CMOs have a significant opportunity to analyze the ‘micro market’ and to push-market to individuals who have been identified as potential buyers or consumers for the company. Using cookies, click-throughs, ad tracking and other technologies, the CMO has the ability to touch individuals directly. Customized pop-ups, sidebar ads and push email to the potential consumer draw on information held by search engine and portal providers.
The promise of Big Data is to give the CMO the ability to look at huge volumes of unstructured internal and external data, locating, summarizing and drawing inferences from the verbs, adverbs and adjectives associated with a product name, brand, company or other named references. Marketing support groups can take direct action on what’s found, or the CMO can tailor marketing plans to exploit or blunt what they learn from the communication marketplace.
CIOs can help in these efforts by:
Identifying and brokering relationships with firms who do the ‘private investigator’ work around these social media attack issues. The environment moves so quickly, with new social media sites popping up weekly, that it’s impractical for CIOs to try to manage this capability internally.
Enabling and supporting the use of big data tools and workflow technology to identify and route posts and comments about the company and brand into the care center for response.
Additionally, enabling big data capabilities to identify trends in posts and correlate them with demographic groups, giving the CMO insight into the trends and views of key market segments that is more robust and real-time than focus groups and surveys.
Managing the effectiveness of personalization ads, providing the information exchange with the providers and data reduction capabilities needed to evaluate the effectiveness of these micro-marketing ventures.
The work for the CIO is far different than it is for other internal customers. The focus should be on enablement and support for rapidly evolving and potentially ‘one use’ analysis. Development constructs, even Agile, are not sufficient for this type of use case. In this environment, it’s not lead, follow or get out of the way; it’s all three at the same time.
The pressure to reduce Selling, General and Administrative (SG&A) costs while improving business agility has increased over the last several years. Many business leaders believe they have reduced SG&A and these costs cannot be reduced any further, yet our experience shows SG&A expenses, both as a percent of sales and in total dollars can still be significantly reduced. WGroup believes many companies still have an opportunity to further optimize and transform their SG&A spend to support growth, reduce cost and create a sustainable operating model that supports future demand and value creation.
There are several levers that can be utilized to further optimize SG&A: Automation, Consolidation, Sourcing, Rationalization and Process Redesign. Some examples include:
Automation, software tools and Software-as-a-Service (SaaS) are changing the way processes are performed. These tools span functions such as Sales and Marketing, Human Resources, Finance and Information Technology.
Consolidation of redundant functions and use of shared services have the potential to streamline the services provided to the businesses.
Outsourcing support functions can be done fairly rapidly and can provide more efficient and cost-effective services for many companies.
Rationalization refers to decommissioning assets, such as facilities and hardware, that are too dated and not fully utilized.
Reviewing, standardizing and optimizing processes can reduce cost for the function as well as make the function a lot more efficient.
SG&A initiatives are typically transformation initiatives that require both short-term and medium-term commitments. The transformation often requires hard decisions that must be made and implemented successfully through proper program management and change management programs.
One often hears of the use of benchmarking in SG&A exercises. While benchmarks offer value, they merely indicate functions where costs may be high, and they may point to areas of opportunity. Only a more detailed understanding of SG&A and the client’s specific business model can provide the detailed information needed to actualize the transformation. SG&A improvement and cost reduction are not the one-size-fits-all initiative that benchmarking comparisons imply.
Proper governance and program management are needed to manage the resource demand to ensure resources are applied appropriately to the various initiatives and operations. Moreover, value creation requires a continual focus on NPV, IRR or other such measures before making any capital investments and a business plan review process that is both efficient and effective.
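The NPV screen mentioned above is straightforward to compute before any capital commitment. The 10% discount rate and the cash flows below are illustrative placeholders only:

```python
# Illustrative NPV sketch for screening an SG&A transformation investment.
# The 10% discount rate and cash flows are hypothetical placeholders.
def npv(rate, cash_flows):
    """Net present value; cash_flows[0] is the upfront (negative) outlay at t=0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Year 0 investment, then three years of projected savings.
flows = [-500_000, 200_000, 250_000, 300_000]
result = npv(0.10, flows)
print(round(result, 2))
# → 113824.19 (positive, so the initiative passes this screen)
```

A positive NPV is a necessary but not sufficient condition; as noted above, an efficient business plan review process should still test the assumptions behind each projected cash flow.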
Above all, achieving sustainability of your SG&A operations through proper alignment with business strategy, future demand management and value creation are critical. Alignment with the business strategy requires a planning process where anything that does not align with company end goals is continually identified, reviewed and addressed.
Recently, Rethink IT introduced 5 disruptive trends currently impacting IT and business leaders. One of the most unruly is the shift of IT responsibility to the various business units within a company, bypassing IT in some instances. Given the loosening of IT’s stronghold on technology, and the consequent transfer of budget allocations throughout an organization, how must governance change?
Before we consider this important question, we first need to define the term governance. In a broad sense, governance refers to the management of IT investment: hardware, software, services, people, and facilities. It also includes maintaining alignment with business practices, updating investment strategies, adhering to adopted standards and architectures, and managing the overall demand for IT services.
Traditionally, the IT governance organization consisted of the CIO, who acted as chairperson, senior IT leaders, representatives from finance, and perhaps a few business leaders. The governance process focused on gathering requirements from the business units for new application functionality, compiling the needs associated with running the IT infrastructure, and reconciling both with the IT budget. The result was a prioritized spend for the upcoming planning period.
Today, with a greater proportion of IT spend originating in the individual business units, the balance of power has noticeably shifted, and the traditional governance organization no longer has the same level of authority. The governance model, both its organization and its processes, must therefore be adapted and adopted anew.
WGroup has identified the following key suggestions for the new generation of IT governance:
Business leaders must take a more active, participatory role. It is imperative that they are part of, and actively engaged in, the governance process. They need to represent their parts of the business while also maintaining an appreciation for the priorities detailed in the enterprise’s strategic plan, and make decisions in that context.
The role of the governance body becomes one of ensuring that the various business entities implement IT solutions that are complementary, that company assets (i.e., data) are protected, that investment is aligned with expected benefits, and that risks are identified and mitigated.
The CIO’s role becomes that of a key contributor of emerging technology opportunities, an arbiter of technology investment decisions, and an enforcer of standards designed to protect assets. While this paradigm may appear to diminish the CIO’s role, it in fact takes on new life. In this new role, the CIO truly represents the needs and priorities of the overall enterprise. The greatest challenge is changing one’s personal operating mode to one of facilitation, steering the governance organization toward decisions that reflect those priorities.
A well-thought-out governance model serves as a foundation for managing other disruptive trends affecting IT, such as the increased focus on social, mobility, big data and “bring your own device.” When business leadership takes an active role and becomes exposed to the opportunities and risks these trends present, it can make better decisions for the enterprise and align more fully behind those decisions.