As your organization evolves and its use of videoconferencing (VC) increases, use the information in this article to help you manage and control that growth.
Model for growth
This article uses a model in which an organization's VC use moves through four stages of maturity: initial, reactive, proactive, and preemptive. These stages are based on common maintenance models used in various industries.
Determining whether your organization’s VC usage is mature
You can examine your organization's practices in several categories to determine whether it has reached a mature stage of videoconferencing use. In general, a mature organization has the following practices:
- Overall: has commitment from IT management, a defined strategy with business-aligned objectives, a plan in place for 10x and 100x growth, and a clear view of which business problems it's trying to solve with VC
- Process: processes are under continuous improvement, most activities that can be automated are automated, and processes enable the organization to deliver the service as agreed with customers
- Performance: performance is consistent and reliable, the organization adapts quickly to a changing environment, and it meets and exceeds customer expectations
- Governance: service improvements are prioritized by business value, operations are integrated with business planning activities, and the organization is solving the business problems it set out to solve
Measuring stages of maturity
The following tables describe organizations at each stage of VC maturity, using the stages introduced in this article. Use these tables to assess your organization's current status and to help plan its evolution and expansion.
Vision and management

| Category | Initial stage | Reactive stage | Proactive stage | Preemptive stage |
|---|---|---|---|---|
| Management commitment | "Bring your own" VC | Reliance on individual technical knowledge<br>Funding via business units<br>Uncoordinated approach to VC<br>Investment in innovation is largely driven by construction projects | Objectives and targets in place<br>Core infrastructure centrally funded<br>VC is a clearly defined part of conferencing strategy<br>Standards for technology and room designs are regularly published | Strategic objectives and goals are aligned to strategic business goals<br>Central funding of strategy is aligned to strategic business goals<br>Development investment is driven by performance data and stakeholder management |
| Defined strategy | "Bring your own" VC | No strategic thinking<br>Ad-hoc development<br>No product development principles in place<br>Larger changes driven by windfall funding from large construction projects | Standalone VC strategy<br>VC conceptualized, designed, and delivered as a product<br>Regular, scheduled product updates<br>Larger changes funded through the annual IT budgeting cycle | Strategic objectives and goals aligned to strategic business goals<br>VC conceptualized, designed, and delivered as a service<br>Continuous service improvement<br>Strategic development is driven by performance data and stakeholder management |
| Planning for 10x and 100x growth | No plan in place | No plan in place<br>Main blockers identified<br>Cross-team collaboration impacted by silos<br>Manual, repeatable processes predominate<br>Efficiency of growth impacted by large data gaps and misalignments | 10x plan in place<br>Many blockers removed, but systemic blockers remain<br>Cross-team collaboration driven by goal alignment<br>Increasing process automation<br>Efficiency of growth impacted by coordination between multiple platforms | Learnings from 10x execution inform the 100x plan<br>Cross-team collaboration driven by shared, business-driven goals<br>Single management platform |
Process

| Category | Initial stage | Reactive stage | Proactive stage | Preemptive stage |
|---|---|---|---|---|
| Processes under continuous improvement | No plan in place | Few, if any, documented procedures to check compliance against<br>Warnings, non-compliance, and variations addressed tactically<br>Improvements often appear ad-hoc and lack a strategic focus | Compliance regularly checked against documented procedures via audit<br>Lack of service change analysis can lead to unexpected outcomes | Improvements are actively sought, registered, prioritized, and implemented based on business value and business case<br>Pre-release testing, plus a veto over service changes that haven't been through change analysis, minimizes unexpected outcomes |
| Process automation | Little or no automation | Automation efforts are ad-hoc and often a by-product of larger IT initiatives<br>Multiple, uncoordinated management platforms<br>Little impact on SLAs, unit cost of service delivery, and speed of adaptation | VC-specific automation is underway, such as automatic room configuration checks (see the sketch after this table)<br>Multiple management platforms, broadly integrated<br>Predominantly anecdotal improvements to SLAs, unit cost of service delivery, and speed of adaptation | Most activities that can be automated are automated<br>Single management platform<br>Automation shows quantified benefits in improved SLAs, lower unit cost of service delivery, and improved speed of adaptation |
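The proactive-stage row above mentions automatic room configuration checks. As a concrete illustration, the following is a minimal sketch of such a check in Python; the inventory file name, setting names, and baseline values are illustrative assumptions, not the API of any particular management platform.

```python
# Minimal sketch of an automated room-configuration compliance check.
# The inventory file, setting names, and baseline values below are
# illustrative assumptions, not the API of any specific product.

import json
from pathlib import Path

# Baseline that every room is expected to match (illustrative values).
BASELINE = {
    "firmware_version": "2.1.0",
    "default_mic_gain": 60,
    "auto_answer": True,
}


def check_room(room: dict) -> list[str]:
    """Return the settings where a room deviates from the baseline."""
    deviations = []
    for setting, expected in BASELINE.items():
        actual = room.get(setting)
        if actual != expected:
            deviations.append(f"{setting}: expected {expected!r}, found {actual!r}")
    return deviations


def audit(inventory_path: str) -> None:
    """Read a JSON export of room configurations and report non-compliance."""
    rooms = json.loads(Path(inventory_path).read_text())
    for room in rooms:
        issues = check_room(room)
        if issues:
            print(f"Room {room.get('name', '<unnamed>')} is non-compliant:")
            for issue in issues:
                print(f"  - {issue}")


if __name__ == "__main__":
    # "room_inventory.json" is a hypothetical export from your management platform.
    audit("room_inventory.json")
```

At the proactive stage, a script like this might run on a schedule and feed an audit report; at the preemptive stage, the same checks would typically live in the single management platform so that deviations are detected and remediated automatically.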
Performance

| Category | Initial stage | Reactive stage | Proactive stage | Preemptive stage |
|---|---|---|---|---|
| Consistent, reliable performance | Performance varies widely<br>No agreed measures of performance<br>Performance is not monitored | Performance varies<br>Basic measures of performance, such as room utilization, uptime, and usage change over time<br>Performance is monitored, but is often broken by inter-process relationships<br>Little meaningful comparison of VC against other communications platforms | Consistent performance with some variance<br>Measurement tied to problems with business value, for example employee productivity<br>Service value from the customer perspective is published | Robust performance<br>Measurement shows the service's impact on a broad range of specific business problems<br>Impact of inter-process relationships and dependencies is embedded<br>A known investment produces a known collaborative outcome |
| Speed of adaptation | Can adapt very quickly, but with little or no governance<br>Capable of pivoting, albeit with a lack of strategic intent<br>Struggles to scale due to a lack of agreed processes or targeted outcomes | Unlikely to adapt, pivot, or scale quickly due to a lack of coordination, a lack of processes, and a lack of clarity on how to deliver business value | Fast adaptation, pivoting, and scaling are all possible, but a lack of service change analysis often leads to poor practice being introduced into the network and then multiplied | Speed of adaptation is built into success criteria<br>Research-driven findings allow an iterative approach to help determine the best outcomes<br>Ability to scale is pre-planned; threshold measurements define when scaling needs to be activated, and when it needs to slow down, pause, or stop (see the sketch after this table) |
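To make the preemptive-stage idea of threshold measurements more concrete, here is a small sketch that turns room-utilization samples into a scaling signal. The thresholds, sample values, and the scaling_signal helper are assumptions for illustration; real criteria would come from your own service targets and capacity plan.

```python
# Minimal sketch of threshold-based scaling signals, assuming you already
# collect room-utilization figures. Thresholds and sample data are illustrative.

from statistics import mean

# Illustrative thresholds: when average utilization crosses these bounds,
# the growth plan says to activate, hold, or pause expansion.
SCALE_UP_THRESHOLD = 0.75     # rooms busy 75% or more of working hours
SCALE_PAUSE_THRESHOLD = 0.40  # rooms busy less than 40% of working hours


def scaling_signal(utilization_samples: list[float]) -> str:
    """Translate recent utilization samples (0.0-1.0) into a scaling decision."""
    average = mean(utilization_samples)
    if average >= SCALE_UP_THRESHOLD:
        return "activate scale-out: demand is consistently above capacity targets"
    if average <= SCALE_PAUSE_THRESHOLD:
        return "pause scale-out: capacity is ahead of demand"
    return "hold: utilization is within the planned band"


# Example: average room utilization for one site over the last four weeks.
print(scaling_signal([0.78, 0.81, 0.74, 0.79]))
```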
Governance

| Category | Initial stage | Reactive stage | Proactive stage | Preemptive stage |
|---|---|---|---|---|
| Governance | No governance<br>No concept of service improvement | Little or no governance<br>Technical focus, with little or no service focus<br>Some customer feedback is captured<br>Performance reported to internal stakeholders<br>Ad-hoc change management, often driven by problems | Regular governance meetings<br>Balance of technical and service focus<br>Customer feedback captured and acted on<br>Performance reported to internal and external stakeholders<br>Formal change management in place | All activities are subject to management control and governance<br>Business value drives service improvement<br>Regular service reviews with customers validate continued effectiveness<br>Continual improvement in place |
| Integrated with business planning | No integration | Annual budget bid for IT infrastructure spend<br>Annual budget bids from some individual business lines choosing to upgrade their VC equipment and rooms<br>Emergency funding requests via change requests, driven by performance problems | Core infrastructure centrally funded<br>Scheduled, coordinated annual budget bids from business lines following an agreed product upgrade path<br>Emergency funding requests mainly driven by spikes in demand above pre-planned thresholds | Centrally funded<br>Robust performance and known outcomes lead to few, if any, emergency funding requests |