How we work
All of our projects involve the development of analytic or simulation models of the systems we are attempting to understand, design and deliver. These models are predictive: they allow us to make testable statements about the behaviour of the systems under examination. They are also compact and auditable, so third parties can examine them, integrate them into their own analyses and assure themselves that the models faithfully represent the systems under test. Our toolkit, Predictive Modelling for Business Outcomes (PMBO), has been widely deployed in both the private and public sectors.
We rarely use a single model to represent a system under test. Instead, we have developed a process of 'multi-model' integration, choosing between different model representations and analytic techniques to answer different questions about the system: for example performance under load, performance during disaster recovery, user behaviour, system correctness and system availability.
Our modelling tools and processes are fast, very fast. This allows us to take a group of very different stakeholders, for example a marketing expert, a financial director, an operations director and an R&D engineer, and illustrate how their often conflicting concerns interact, both constructively and destructively. We do this through a workshop process that develops models in real time to expose conflicts and trade-offs, allowing organisations to reach globally rather than locally optimal solutions.
These models then become 'sand pits': virtual systems that can be tested against changes in technology, business conditions, processes, human behaviour and regulatory or environmental factors. Again, our exploration tools can test millions of variations in key parameters in a very short time, allowing users to understand the sensitivity of their proposed (or even existing) systems to change, predict costs, evaluate value, and even plan and test deployment, management and shutdown processes.
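In spirit, this kind of exploration is an exhaustive sweep over key parameters followed by a sensitivity ranking. The sketch below is a generic illustration of that idea, not PMBO itself: the model, parameter names and ranges are invented for the example.

```python
import itertools

# Illustrative stand-in for a system model: annual cost as a function of
# demand, unit cost and failure rate. Real models are far richer; this
# only shows the shape of a parameter sweep. All values are assumptions.
def annual_cost(demand, unit_cost, failure_rate):
    return demand * unit_cost * (1 + 10 * failure_rate)

# Candidate values for each key parameter (assumed ranges).
demands = [1_000, 5_000, 10_000]
unit_costs = [2.0, 2.5, 3.0]
failure_rates = [0.01, 0.05, 0.10]

# Evaluate every combination and record the outcome.
results = {
    (d, c, f): annual_cost(d, c, f)
    for d, c, f in itertools.product(demands, unit_costs, failure_rates)
}

# Ranking the combinations shows which parameter settings the proposed
# system is most and least exposed to.
best = min(results, key=results.get)
worst = max(results, key=results.get)
```

Scaled up to millions of combinations (and randomised rather than exhaustive sampling), the same pattern underpins the sensitivity questions described above.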
We treat the systems we analyse as control systems. This means we can test different measurement regimes, their effectiveness as management tools and their costs of deployment. This not only improves the effectiveness of measurement and management regimes, but also allows governance to be placed on a solid scientific footing.
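One way to see what comparing measurement regimes means in practice is a toy control loop: a quantity drifts off target and is corrected only when it is measured. The loop below is an illustrative assumption, not a PMBO model; it simply shows how the cost of measuring more often can be weighed against tracking quality.

```python
def run_loop(measure_every, steps=100, target=100.0, gain=0.5):
    """Toy control loop: the state drifts downward each step and is
    nudged back toward the target only when a measurement is taken."""
    state = 50.0
    total_error = 0.0
    for t in range(steps):
        if t % measure_every == 0:   # measurement + corrective action
            state += gain * (target - state)
        state -= 1.0                 # constant downward drift
        total_error += abs(target - state)
    return total_error / steps       # average tracking error

# A denser measurement regime costs more to run but tracks the target
# more closely; the gap between the two quantifies what the extra
# measurements buy the organisation.
frequent = run_loop(measure_every=1)
sparse = run_loop(measure_every=10)
```

Replacing the drift, gain and measurement costs with figures from a real system turns this from a toy into the kind of governance question described above.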
Ultimately, when we deliver our analyses and recommendations, we back them up with executable models that our partners can test. Although we often use tools already in place within a customer's organisation, much of our work is done with system analytic tools of our own development. These are always made available to our customers, and while we continue to support the models, we also encourage customers to take ownership of them so that they become part of their internal processes.
To see what concinnitās can do for you, read more here.