The book presents three characteristics of the “Ideal Team Player”. These characteristics are:
Humble – a lack of ego or concern about status. Humble people share credit, emphasise the team and define success collectively.
Hungry – always looking for more to do, more responsibility and more to learn – a manageable and sustainable commitment to doing a good job and going above and beyond when required, but not in a selfish way.
Smart – common sense about people. Having good judgement and intuition around the subtleties of group interactions.
“Humility is not thinking less of yourself but thinking of yourself less”
C. S. Lewis
Tell me about the most important accomplishment of your career? Look for I’s and we’s
What was the biggest embarrassment in your career or biggest failure? Humble people are not afraid to tell their unflattering stories
How did you handle the failure? Look for what was learnt
What is your greatest weakness? Are candidates uncomfortable acknowledging something
How do you handle apologies, either giving or receiving them? Humble people are not afraid to say sorry or accept others with grace
Tell me about someone who is better than you in an area which matters to you? Look for a genuine appreciation of others
What is the hardest you have ever worked on something in your life? Look for joy
What do you like to do when you are not working? A long list of hobbies is a warning
Did you work hard when you were a teenager? Look for difficulty, sacrifice and hardship. A work ethic tends (though not always) to start in early life
What kind of hours do you usually work? If people focus on the hours, schedule or balance then they may not be hungry
How would you describe your personality? Smart people generally know themselves and talk about their behaviours
What do you do in your personal life which others may find annoying? Smart people know their annoying habits and try to moderate them
What kind of people annoy you the most and how do you deal with them? Look for self-awareness and self-control
Would your former colleagues describe you as an empathetic person? Does the person value empathy
Does he genuinely compliment or praise teammates without hesitation?
Does she easily admit when she makes a mistake?
Is he willing to take on lower-level work for the good of the team?
Does she gladly share credit for team accomplishments?
Does he readily acknowledge his weaknesses?
Does she offer and receive apologies graciously?
Does he do more than what is required in his own job?
Does she have passion for the mission of the team?
Does he feel a sense of personal responsibility for the overall success of the team?
Is she willing to contribute to and think about work outside of office hours?
Is he willing and eager to take on tedious and challenging tasks whenever necessary?
Does she look for opportunities to contribute outside of her area of responsibility?
Does he seem to know what teammates are feeling during meetings and interactions?
Does she show empathy to others on the team?
Does he demonstrate an interest in the lives of teammates?
Is she an attentive listener?
Is he aware of how his words and actions impact others on the team?
Is she good at adjusting her behavior and style to fit the nature of a conversation or relationship?
On a scale of 3 = Usually, 2 = Sometimes, 1 = Rarely, rate what “My teammates would say”:
I compliment or praise them without hesitation
I easily admit to my mistakes.
I am willing to take on lower-level work for the good of the team
I gladly share credit for team accomplishments.
I readily acknowledge my weaknesses.
I offer and accept apologies graciously.
I do more than what is required in my own job.
I have passion for the “mission” of the team.
I feel a sense of personal responsibility for the overall success of the team.
I am willing to contribute to and think about work outside of office hours.
I am willing to take on tedious or challenging tasks whenever necessary.
I look for opportunities to contribute outside of my area of responsibility.
I generally understand what others are feeling during meetings and conversations.
I show empathy to others on the team.
I demonstrate an interest in the lives of my teammates.
I am an attentive listener.
I am aware of how my words and actions impact others on the team.
I adjust my behavior and style to fit the nature of a conversation or relationship.
Monitoring across application and infrastructure to inform business decisions
Check system health proactively
Improve process and management with work-in-progress (WIP) limits
Visualise work to monitor quality and communicate through the team
Support a generative culture
Encourage and support learning
Support and facilitate collaboration among teams
Provide resources and tools that make work meaningful
Support or embody transformative leadership
Generative culture – risks are shared and failure leads to inquiry
Bureaucratic culture – failure leads to justice and novelty leads to problems
Pathological culture – failure leads to scapegoating
Servant leaders focus on their followers’ development and performance, whereas transformational leaders focus on getting followers to identify with the organisation and engage in support of organisational objectives.
People tend to equate the quality of a decision with the quality of the result. However, in the real world it is not possible to make perfect decisions because some of the information is hidden – as such, real-world decisions are more like games of poker than chess.
As a result we are very bad at separating luck and skill, because bad decisions can still produce good outcomes. In poker the feedback loop is short; in the real world it is much longer, making the evaluation of decisions even harder.
We suffer from hindsight bias – believing we could have predicted something at the time. Alternatively, we attribute all of our successes to skill and all of our failures to bad luck – in reality neither is fully the case. There is a lot of space between being unequivocally “right” or “wrong”, which we tend to vastly simplify. Offloading the losses to luck and crediting the wins to skill means we persist with our approach without learning. This is because of our ego’s need for a positive self-image, and because losing feels twice as bad as winning feels good.
How our beliefs are formed:
We hear something we believe is plausible
We then believe it to be true
Sometimes at some point later, if we have the time and inclination, we think about it and vet it to determine whether it is, in fact, true or false
Instead of altering our beliefs to fit new information, we do the opposite, altering our interpretation of that information to fit our beliefs. This prevents us from learning.
There is a big difference between clocking up experience and becoming an expert.
In reviewing decisions the result does not matter as this is influenced by luck which is out of your control.
How can we overcome our deficiencies?
Putting a percentage on our statements – this helps us realise that things are not fully true or false and in calculating the percentage we evaluate our beliefs.
Instead of fixating on an outcome, think of a set of future outcomes.
Better evaluate decisions
Communism – data belonging to the group, data which we have an urge to leave out is exactly the data we must share
Universalism – universal standard no matter the source of data
Disinterestedness – vigilance against things which could influence a group’s decision
Organised Skepticism – discussions to encourage engagement and dissent
Building a decision support group
Visualising our future self, or asking how I will feel about the choice in 10 minutes, 10 months and 10 years
Run premortems to evaluate both sides of the problem
In a Hierarchy Every Employee Tends to Rise to Their Level of Incompetence
Each role requires a different set of skills, as such people using the skills which made them successful in a different role will either not help them in the new role or will hold them back.
A second manifestation is where subordinates of the person in the incompetent role follow the rules tightly, because deviation from them will count against them personally. They are incentivised to stick strictly to the rules.
Organisations work around it by “promoting” people or moving them sideways into roles in which they can do no harm.
If people are super-incompetent then they are easily let go. Ironically this also applies to the super-competent. These people challenge the hierarchy and are let go to preserve the current order.
Work is accomplished by those employees who have not yet reached their level of incompetence
There is no direct relationship between the size of the staff and the amount of useful work done.
Nothing fails like success
Psychological profiling can place employees in roles which they are most suitable for. This means that any promotion will be to an area of less competence.
Good followers do not become good leaders
There is a compulsive desire to get to the level at which you are not competent, as the jobs which are easy for you to perform well offer no challenge. As such people push themselves into the roles which they cannot do. The challenge is rather to stay one level below your level of incompetence.
Incompetence can be classified into four types:
This is not just a workplace phenomenon; political candidates are chosen to win elections rather than for their law-making skills.
The Principle of Quantified Overall Economics – selecting actions based on their economics, e.g. fixing a problem in manufacturing (where it costs us 10x to correct) can be weighed against the time it would take in development to identify issues which quickly come to light in manufacturing (e.g. 5 weeks in the lab vs 1 week in manufacturing).
The Principle of Interconnected Variables – in an interconnected system one variable impacts another e.g. product cost might impact product value and development expense. It is important that all of these can be measured in the same unit: life-cycle profit impact.
The Principle of Quantified Cost of Delay – Cost Of Delay (COD) opens the economic door to being able to evaluate options and tradeoffs to improve decision making.
The Principle of Economic Value Added – The value added by an activity is the difference in the price an economically rational buyer would pay for the work before and after the activity.
The Inactivity Principle – Although worker inactivity is a visible form of waste, the greater source of waste is work product sitting idle in process queues.
The U-Curve Principle – In multivariable problems the result is often a U-curve, and U-curves tend to have flat bottoms – so our inputs need to be in the same unit but do not have to be highly accurate for the answer to be close enough to optimal.
The Imperfection Principle – Even imperfect models improve decision making.
The Principle of Small Decisions – The cumulative effect of multiple small decisions can be massive. As such putting effort into these will produce large benefits.
The Principle of Continuous Economic Trade-offs – Having a plan and sticking to it means that new information is not considered; this can mean that products which made economic sense, but no longer do, are still delivered because the new information is never fed into the economic model.
The First Perishability Principle – Many economic choices are more valuable when made quickly, and the information needed to make them is most available at the lowest levels. As such, those low levels should be able to make the decisions.
The Subdivision Principle – a bad option can be subdivided into its constituent economic parts, and a good option might be buried inside.
The Principle of Early Harvesting – provide ways for early improvements to be executed quickly.
The First Decision Rule Principle – Push decision making to the lowest level where most of the small decisions are made. Create economic decision rules to align economic choices, ensure they are optimal at the system level, push down control to the lowest level with limited risk and streamline decision making.
The First Market Principle – Instead of a top-down allocation of resources, which tends to go to the best lobbyists, create an internal market where projects can purchase more or faster service. E.g. a CAD service which usually delivers work in 1 week could offer projects a 1-day service for a premium; if the 1-day service is popular this provides funding to cover it.
The Principle of Optimum Decision Timing – Market and technical uncertainty decrease over time, a model for the cost of a decision and value created by waiting can indicate when the optimal time to make a decision is.
The Principle of Marginal Economics – To calculate the optimal investment in product features compared to the extra value this additional work provides. e.g. do we really need to continue working to deliver the last 5%?
The Sunk Cost Principle – Decisions should be made on marginal economics, not on sunk cost. If there are no marginal improvements, or if other products offer higher returns on the remaining investment, then investment should be stopped, no matter how much has already been invested.
The Principle of Buying Information – information reduces uncertainty and thus improves economic value. Where the cost of information is less than the economic value provided it is a good investment, noting that the economic value is not constant during a project.
The Insurance Principle – Don’t pay more for set-based concurrent engineering, parallel efforts to reduce risks by producing multiple solutions, than the cost of failure.
The Newsboy Principle – The balance of failure and success. If a successful product produces £1m and each attempt costs £0.1m, then a success rate of 1 in 10 breaks even.
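The break-even arithmetic above can be sketched in a few lines (the figures are the illustrative ones from the note, not from any real product):

```python
def break_even_success_rate(payoff_per_success: float, cost_per_attempt: float) -> float:
    """Success rate at which expected payoff equals expected cost."""
    return cost_per_attempt / payoff_per_success

# Illustrative figures: each success returns £1m, each attempt costs £0.1m.
rate = break_even_success_rate(payoff_per_success=1_000_000, cost_per_attempt=100_000)
print(rate)  # 0.1 -> a 1-in-10 success rate breaks even
```

Any success rate above the break-even rate makes the portfolio of attempts profitable in expectation.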
The Show Me the Money Principle – Speaking the language of money influences financial decision makers.
Queueing Principles – periods of inactivity in queues impact the economics of delivery
The Principle of Invisible Inventory – Physically and financially invisible work as a result of partial investment (e.g. design or feasibility) is difficult to see.
The Principle of Queueing Waste – Product development queues tend to be large because of the long time for products to flow through the pipeline. Queues create economic waste in the form of longer cycle time, increased risk (others may come to market earlier), more variability (because of less slack time), more overhead (management, tracking and reporting), lower quality (because of slower feedback) and less motivation (slower to see our work completed).
The Principle of Queueing Capacity Utilization – The higher the levels of utilisation the higher the queue sizes will be.
The Principle of High-Queue States – At a given utilisation the probability that there are n+1 items in a queue is lower than there being n items; however the cost is the total of all delayed work, so cost increases with queue size.
The Principle of Queueing Variability – Queue variability is caused by arrival-rate variability and processing variability. Removing the queue would only be possible if both of these were constant; however this is highly unlikely, and reducing one of the two factors would, at most, halve the queue size, not remove it.
The Principle of Variability Amplification – Queue size grows exponentially with percentage capacity utilisation – as such, the same variance produces much smaller variation in queue size at lower utilisations than at higher ones.
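The standard M/M/1 queueing result (a modelling assumption beyond these notes, but the usual way this growth is illustrated) shows how nonlinearly queue size grows with utilisation:

```python
def items_in_system(utilisation: float) -> float:
    """Expected number of items in an M/M/1 system at a given utilisation (rho)."""
    assert 0 <= utilisation < 1
    return utilisation / (1 - utilisation)

for rho in (0.50, 0.80, 0.90, 0.95, 0.98):
    print(f"{rho:.0%} utilisation -> {items_in_system(rho):5.1f} items in the system")
```

Going from 50% to 90% utilisation multiplies the expected queue by nine; the last few percent of utilisation are where the explosion happens.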
The Principle of Queueing Structure – There is a difference between multiple servers each with their own queue – where a single job can slow that queue; a shared queue for multiple servers – which reduces variability; and a single high-capacity server – which still struggles with variability.
The Principle of Linked Queues – The arrival rate or process rate becomes the output of one system into the queue for the next (depending on the utilisation). Variability in the upstream system can impact the throughput downstream.
The Principle of Queue Size Optimization – Total cost is cost of capacity plus cost of delay. An optimal capacity is balancing delay and capacity costs.
The Principle of Queueing Discipline – The aim is to reduce the economic impact of the queue not the queue itself. Choosing the higher value work can improve the economic output.
The Cumulative Flow Principle – A way to visualise the system to identify queues and trends
Little’s Formula – Wait time = queue size / processing rate.
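A minimal sketch of the formula with illustrative numbers (the item counts and rates are invented for the example):

```python
def wait_time(queue_size: float, processing_rate: float) -> float:
    """Little's formula: average wait = average queue size / average processing rate."""
    return queue_size / processing_rate

# e.g. 30 items waiting and a team completing 10 items per week
print(wait_time(queue_size=30, processing_rate=10))  # 3.0 weeks average wait
```

The usefulness is that queue size and processing rate are both easy to measure, whereas wait time itself is a lagging quantity.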
The First Queue Size Control Principle – Capacity utilisation is hard to control directly; instead respond to queue size – when it increases, increase the capacity to process it, thus lowering utilisation.
The Second Queue Size Control Principle – Cycle time is a lagging indicator; again, focusing on queue size will improve cycle time.
The Diffusion Principle – With random arrival and processing rates statistically the queue size distribution will broaden over time.
The Intervention Principle – We cannot rely on randomness to correct the issues randomness creates. As such we should intervene early to prevent things worsening.
Variability Principles – the economic cost of variability is more important than the amount of variability. In manufacturing reducing variability improves the economics – however this is not the case for product development which is not simply repetitive.
The Principle of Beneficial Variability – The expected monetary value is the probability of success times the net benefit.
The Principle of Asymmetric Payoffs – Where the payoffs can be greater than the costs it tends to be worth investigating higher variability candidates first as the payoff (if successful) will likely be greater.
The Principle of Optimum Variability – Less or more variability is not the aim; optimal variability is where risk and reward are balanced.
The Principle of Optimum Failure Rate – Depending on the aim the target failure rate will differ – when testing to explore, a 50% success rate maximises the information learned; when testing to validate, the target is 100%.
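The 50% figure comes from information theory: a pass/fail test carries the most information when either outcome is equally likely. A small sketch using Shannon entropy (an assumption about the underlying rationale, which the note does not spell out):

```python
import math

def information_per_test(p_success: float) -> float:
    """Shannon entropy (bits) of a single pass/fail test outcome."""
    return -sum(p * math.log2(p) for p in (p_success, 1 - p_success) if p > 0)

for p in (0.1, 0.5, 0.9, 1.0):
    print(f"success rate {p:.0%}: {information_per_test(p):.2f} bits per test")
```

A test that always passes (p = 1.0) yields zero bits: we learn nothing we did not already know.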
The Principle of Variability Pooling – If we pool together independent random variables then the pool experiences less volatility in its entirety. E.g. a share fund reduces the impact of the volatility of individual shares.
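A quick simulation of the pooling effect (the 0.2 volatility, fund size and seed are arbitrary illustrations): the volatility of the average of n independent holdings shrinks roughly as 1/sqrt(n).

```python
import random
import statistics

random.seed(42)

def pooled_volatility(n_assets: int, n_trials: int = 10_000) -> float:
    """Std deviation of the average return of n independent holdings,
    each with mean 1.0 and individual volatility 0.2."""
    return statistics.stdev(
        statistics.fmean(random.gauss(1.0, 0.2) for _ in range(n_assets))
        for _ in range(n_trials)
    )

single = pooled_volatility(1)
pooled = pooled_volatility(16)
print(f"one share: {single:.3f}, fund of 16: {pooled:.3f}")  # pooled ~ single / 4
```

The same mechanism is why aggregating many independent project risks is less volatile than any single risk.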
The Principle of Short-Term Forecasting – Inaccuracies in forecasts grow exponentially; a 2-year horizon is 10 times harder to forecast than a 1-year one. The smaller the scope, the shorter the planning horizon and the lower the risk, so approval can be simpler.
The Principle of Small Experiments – Sub-dividing a risk into a sequence of smaller risks so we can learn more iteratively increases the chance of success.
The Repetition Principle – Doing lots of small batches incentivises us to automate the repetitive parts, which improves reliability and quality.
The Reuse Principle – Where economically sensible we should reuse designs. The key word is economically: reuse per se is not the goal, only reuse which adds value.
The Principle of Negative Covariance – The ability to provide a negative counterbalancing effect, such as cross training such that if there is an unexpected rise in demand then this can be counterbalanced by increased support.
The Buffer Principle – Provides a margin of error for delivery to clients.
The Principle of Variability Consequence – It is not just about reducing the frequency of defects but about reducing their consequences – this reduces the cost of variability. E.g. it is better to stop and review an issue with a screw before using the whole batch of faulty screws and creating greater waste later. Fast feedback loops.
The Nonlinearity Principle – In some range the system operates linearly – outside of this the change is rapid. E.g. a boat can sway widely up to a point, then capsizes.
The Principle of Variability Substitution – Substitute cheap variability for expensive variability, e.g. paying to expedite parts to stabilise the schedule.
The Principle of Iteration Speed – Halving the cycle time will have a quicker reduction in overall errors than reducing the defect rate.
The Principle of Variability Displacement – Not all queues are of equal cost, e.g. a plane circling to land is more expensive than slowing one down mid-flight or delaying take-off.
Batch Size Principles – a topic which is typically ignored but with huge potential value
The Batch Size Queueing Principle – Reducing batch size reduces cycle time as there is less work in flight.
The Batch Size Variability Principle – Smaller batch sizes produce a smoother flow, resulting in smaller – sometimes even eliminated – queues.
The Batch Size Feedback Principle – Smaller batches result in faster feedback.
The Batch Size Risk Principle – Less work in progress, smaller experiments and accelerated feedback means less risk.
The Batch Size Overhead Principle – Less work in progress means less overhead to manage it, e.g. is bug 301 a duplicate of any of the previous 300? What if there were only 11?
The Batch Size Efficiency Principle – Large batches might provide local optimisation but not at the system level.
The Psychology Principle of Batch Size – Small batches improve accountability. Feedback is slow with large batches so the work is not very rewarding.
The Batch Size Slippage Principle – Larger projects/batches tend to gain bigger delays
The Batch Size Death Spiral Principle – Large projects can be too big to fail and suffer from huge amounts of scope creep.
The Least Common Denominator Principle of Batch Size – If one element in the batch is safety critical then the whole batch will be treated as if all elements are safety critical which removes flexibility and increases workload.
The Principle of Batch Size Economics – The optimal batch size is a U curve function so we can make incremental improvements. Batch size changes are reversible. U curves are forgiving so there is space for experimentation.
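The U-curve can be sketched with a simple EOQ-style cost model (the cost figures below are arbitrary illustrations, not from the book): per-item cost is the transaction cost amortised over the batch, plus a holding cost that grows with batch size.

```python
import math

# Arbitrary illustrative costs: 500 per transaction, 2 per item held per period.
TRANSACTION_COST = 500.0
HOLDING_COST = 2.0

def cost_per_item(batch_size: float) -> float:
    """Amortised transaction cost plus average holding cost for one item."""
    return TRANSACTION_COST / batch_size + HOLDING_COST * batch_size / 2

optimum = math.sqrt(2 * TRANSACTION_COST / HOLDING_COST)  # EOQ-style minimum, ~22.4
for b in (5, 10, 22, 45, 100):
    print(f"batch size {b:>3}: cost per item {cost_per_item(b):6.1f}")
```

Note how flat the bottom is: a batch of 45 (double the optimum) costs only about a quarter more per item than the optimum, which is why experimentation with batch size is forgiving.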
The Principle of Low Transaction Cost – Each batch has a transaction cost. Effort put into reducing this transaction cost (e.g. cutting a die-stamping changeover from 24 hours to 10 minutes) lowers the overall costs.
The Principle of Batch Size Diseconomies – Transaction and holding costs are difficult to fully model. Since transaction costs are usually more reducible than expected, reducing batch size will likely produce better-than-expected results.
The Batch Size Packing Principle – Small batches help us get better resource utilisation, even in an environment with both large and small batches.
The Fluidity Principle – By reducing interdependence we no longer need to follow a strict sequence. This gives more flexibility and opportunities for reuse.
The Principle of Transport Batches – There are two types of batches: production batches and transportation batches. Each has its own optimisation: production batch size is dictated by the setup time, transport batch size by the fixed cost associated with transportation. Transport batches tend to be more important than production batches.
The Proximity Principle – Transport batch size tends to be a function of distance, so to reduce batch size co-location with the rest of the product development is key.
The Run Length Principle – Smaller production batches increase feedback and can also provide the ability to interleave easier and harder jobs.
The Infrastructure Principle – Investment in infrastructure is key to support smaller batches each at different stages of development.
The Principle of Batch Content – Sequence activities to create maximum value for minimum cost. Removing risk is a key way to increase the expected value.
The Batch Size First Principle – Reducing batch size is more effective than adding capacity to bottlenecks.
The Principle of Dynamic Batch Size – Batch size does not need to be constant, at different phases of the project this might change. E.g. the start when there are more unknowns a smaller batch size might be advantageous.
WIP Constraint Principles – Starting things adds no value, only finishing does
The Principle of WIP Constraints – Positive: a WIP constraint reduces average cycle time. Negative: it permanently rejects potentially valuable demand, which reduces capacity utilisation. Limiting WIP to twice the average produces a 28% improvement in cycle time with only a 1% reduction in utilisation.
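A hedged sketch of this trade-off using the analytic M/M/1/K queueing model (Poisson arrivals and exponential service are modelling assumptions, so the exact percentages differ slightly from the note's figures, but the shape of the trade-off is the same):

```python
def mm1k_stats(rho: float, k: int):
    """Average items in system and effective arrival rate for an M/M/1 queue
    capped at k items (service rate normalised to 1)."""
    p_block = (1 - rho) * rho**k / (1 - rho**(k + 1))
    items = rho * (1 - (k + 1) * rho**k + k * rho**(k + 1)) / (
        (1 - rho) * (1 - rho**(k + 1)))
    lam_eff = rho * (1 - p_block)
    return items, lam_eff

rho = 0.9                                   # 90% capacity utilisation
uncapped_cycle = (rho / (1 - rho)) / rho    # uncapped M/M/1: W = L / lambda
items, lam_eff = mm1k_stats(rho, k=18)      # WIP capped at twice the average of 9
capped_cycle = items / lam_eff              # Little's law with effective arrivals
print(f"cycle time {uncapped_cycle:.1f} -> {capped_cycle:.1f}, "
      f"throughput lost {(rho - lam_eff) / rho:.1%}")
```

At these parameters cycle time drops by roughly a third while under 2% of demand is turned away: a large gain bought with a small utilisation sacrifice.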
The Principle of Rate-Matching – Matching the WIP of adjacent processes prevents the build up of queues
The Principle of Global Constraints – Theory of Constraints (TOC) matching the WIP to the throughput of the bottleneck. This is for stable systems without variable bottlenecks. In reality where there is volatility this can prove too simple.
The Principle of Local Constraints – The Kanban system applies the constraint locally by limiting work until it is pulled. This takes into account volatility in the system.
The Batch Size Decoupling Principle – Using a WIP range decouples and allows optimal batch size e.g. one system optimal at 6 but another at 2.
The Principle of Demand Blocking – When the WIP is reached there are two options – to reject extra demand or to hold it in a low cost queue.
The Principle of WIP Purging – When working at high queue levels the economics change. Jobs should be reviewed to see if they are still economically optimal. Many companies don’t like to kill projects they have invested in, instead they starve them – kill zombie projects to preserve flow.
The Principle of Flexible Requirements – Cutting scope can have a big impact on utilisation.
The Principle of Resource Pulling – Quickly apply extra resource to an emerging queue. Even small additions can make large improvements.
The Principle of Part-Time Resources – These are very valuable when they can be redeployed very quickly and can surge to full time temporarily.
The Big Gun Principle – The best and brightest minds solve problems. The issue is that these people tend to be overloaded, so they cannot respond to emerging issues quickly. This is why it is key to build slack time into their schedules.
The Principle of T-Shaped Resources – This provides both specialist knowledge and flexibility. We need to grow such people by investing and providing opportunities.
The Principle of Skill Overlap – Providing training on adjacent processes so that people can provide extra support when needed.
The Mix Change Principle – In product development different kinds of work take different amounts of time, e.g. the product time to define what users need vs the engineering time to build it. When engineering is working at capacity, product can take on work which is mostly product effort with little engineering, balancing the two streams of work.
The Aging Principle – Items which have taken longer tend to have bigger problems, so seeing the age of projects can highlight areas of challenge.
The Escalation Principle – Plan how you will escalate problems so that the process is clear and the issue can be resolved quickly.
The Principle of Progressive Throttling – Reducing the rate as we are approaching the WIP limit to prevent having to take more severe action.
The Principle of Differential Service – Separate into different streams with independent WIPs and capacity allocations to differentiate service.
The Principle of Adaptive WIP Constraints – When flow is good then increasing WIP can be good as well as reducing it when things need to slow down.
The Expansion Control Principle – Some tasks expand. There are two blocking approaches: first, limit consecutive execution time, breaking off to allow another job to be processed before returning; second, decide when the continued investment is no longer worth the return.
The Principle of the Critical Queue – Adjusting the WIP so that the expensive queues are kept minimal.
The Cumulative Reduction Principle – Ensuring the departure rate is greater than the arrival rate will reduce the WIP.
The Principle of Visual WIP – Make WIP visible, it is easier to focus on what you can see.
Flow Control Principles – operating at high throughput, managing variance and staying economically optimal
The Principle of Congestion Collapse – As a system’s utilisation increases, at some point congestion kicks in and the throughput can collapse.
The Peak Throughput Principle – The system throughput flattens before the point of collapse; as such, operating at a slightly lower level does not significantly reduce throughput. This is achieved by limiting occupancy, aka WIP.
The Principle of Visible Congestion – Limiting WIP and forecasting duration allow for informed decision making.
The Principle of Congestion Pricing – Using pricing to smooth demand
The Principle of Periodic Resynchronization – What is optimal for a subsystem is not always optimal for the system in its entirety. When systems get out of sync they need to be resynchronised – not just to within acceptable bounds but to the centre of them.
The Cadence Capacity Margin Principle – If we wish to meet a regular launch schedule we need enough capacity to absorb delays at intermediate milestones.
The Cadence Reliability Principle – If you don’t know when a launch will happen you will fight to get your work into the next one. Instead make launches regular so people can plan around them.
The Cadence Batch Size Enabling Principle – With a regular cadence the overhead is reduced. Increasing the cadence shrinks the batch size.
The Principle of Cadence Meetings – A regular cadence reduces admin overhead. Most people are at high utilisation, so arranging ad-hoc meetings is not easy and responses are slower than with regular cadence meetings.
The Synchronization Capacity Margin Principle – The multiple items which need to be synchronised must be present at the same time; this margin provides a buffer so any variation in arrival can be absorbed.
The Principle of Multi Project Synchronization – Where there are batch economies these items don’t need to all be for the same project. Pulling together work which could improve multiple projects can have economic benefits.
The Principle of Cross-Functional Synchronization – Instead of reviews being needed by multiple independent functions, instead bring everyone together.
The Synchronization Queueing Principle – Synchronizing batch size and timing between adjacent processes can reduce inventory.
The Harmonic Principle – If cadences are harmonic multiples of one another they stay in tune – e.g. daily, weekly, 4-weekly etc.
The Shortest Job First (SJF) Scheduling Principle – When all jobs have the same delay cost the preferred schedule is the shortest job first.
The High Delay Cost First (HDCF) Scheduling Principle – Where the durations are the same but cost of delay is different using the high delay cost first is optimal.
The Weighted Shortest Job First (WSJF) Scheduling Principle – Priority is based on the Cost of Delay divided by duration.
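A minimal sketch of WSJF scheduling (the job names, delay costs and durations are invented for illustration): sorting by cost of delay over duration minimises the total delay cost paid across the portfolio.

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    cost_of_delay: float  # cost incurred per week the job remains unfinished
    duration: float       # weeks of work

def wsjf_order(jobs):
    """Schedule by cost of delay divided by duration, highest ratio first."""
    return sorted(jobs, key=lambda j: j.cost_of_delay / j.duration, reverse=True)

def total_delay_cost(jobs):
    """Each job pays its cost of delay for every week until it finishes."""
    elapsed = total = 0.0
    for job in jobs:
        elapsed += job.duration
        total += job.cost_of_delay * elapsed
    return total

jobs = [Job("A", 10, 1), Job("B", 10, 5), Job("C", 6, 2)]
print([j.name for j in wsjf_order(jobs)])  # ['A', 'C', 'B']
print(total_delay_cost(wsjf_order(jobs)), total_delay_cost(jobs))  # 108.0 vs 118.0
```

Job B has the highest cost of delay but a long duration, so WSJF correctly slots the cheap, quick C ahead of it.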
The Local Priority Principle – Priorities are local, the scheduling will be local to the queue not global to a project.
The Round-Robin Principle – Where the duration of a task is unknown, completing parts of tasks in sequence can be beneficial; however the time slice needs to be chosen to avoid constant switching – around 80% of jobs should complete within one time slice, so only 20% take more than one.
The Preemption Principle – Preempting the current task (i.e. jumping immediately to a new task) incurs a switching cost. As such, only preempt when the switching cost is low; otherwise prioritisation should remain within the queue.
The Principle of Work Matching – In product work we need to match tasks to people with the right skill set, to improve this matching we need visibility of upcoming work as well as resource availability times.
The Principle of Tailored Routing – Each product should follow its own flow, going through the steps which add value and skipping the ones which do not.
The Principle of Flexible Routing – Taking into account the current network conditions. Selecting the lowest cost path. This routing needs to happen with short time horizons.
The Principle of Alternate Routes – We should have backup routes through critical points, though likely at greater cost. When the optimal route is under utilised it is still wise to trickle some through the alternative routes to ensure they are functioning.
The Principle of Flexible Resources – This allows for flexible routing.
The Principle of Late Binding – Late decision making allows us to have the most information and pick the most appropriate option at that point in time.
The Principle of Local Transparency – Local visibility of the upcoming work can make matching easier e.g. through the use of whiteboards.
The Principle of Preplanned Flexibility – If flexibility is required then it must be invested in, or ways around the constraint planned in advance. Drills will test whether such flexibility actually exists.
The Principle of Resource Centralization – Neither centralised nor decentralised is always correct; it is a balance, as each has different advantages.
The Principle of Flow Conditioning – A smoother flow produces better throughput, especially at the bottleneck, so focus on upstream points to smooth the arrival rate.
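The scheduling rules above (SJF, HDCF and WSJF) collapse into a single sort once each job carries a cost of delay and a duration. A minimal sketch follows; the job names and figures are invented for illustration:

```python
# WSJF prioritisation sketch: highest (cost of delay / duration) first.
# The job list below is a hypothetical example, not from the book.

def wsjf_order(jobs):
    """Sort jobs by Weighted Shortest Job First:
    cost of delay divided by duration, highest ratio first."""
    return sorted(jobs,
                  key=lambda j: j["cost_of_delay"] / j["duration"],
                  reverse=True)

jobs = [
    {"name": "A", "cost_of_delay": 10, "duration": 5},  # ratio 2.0
    {"name": "B", "cost_of_delay": 3,  "duration": 1},  # ratio 3.0
    {"name": "C", "cost_of_delay": 8,  "duration": 8},  # ratio 1.0
]

print([j["name"] for j in wsjf_order(jobs)])  # → ['B', 'A', 'C']
```

With equal costs of delay this degenerates to shortest-job-first; with equal durations it degenerates to highest-delay-cost-first, matching the two simpler principles.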
Fast Feedback Principles – in product development there are opportunities not just to prevent losses but to improve outcomes.
The Principle of Maximum Economic Influence – Focus on unit costs not project costs as these have more impact on the profitability.
The Principle of Efficient Control – Not all controls can be efficiently influenced. Identify and focus on the ones you can effectively influence.
The Principle of Leading Indicators – Leading indicators allow the opportunity to resolve issues, whereas lagging indicators only report after the fact.
The Principle of Balanced Set Points – Set limits on economic variables to raise awareness of issues in each variable.
The Moving Target Principle – The economic optimum is constantly changing; like a heat-seeking missile we should constantly adapt to get closer to it.
The Exploitation Principle – The target is not to follow the plan; the target is to produce the most economic benefit. As such we should exploit opportunities when we come across them rather than ignoring them to stick to the plan.
The Queue Reduction Principle of Feedback – Fast feedback shortens the time between cause and effect, which reduces variance, lowers inventory and results in faster flow and lower WIP.
The Fast-Learning Principle – Faster feedback results in better learning; achieving it requires investment in generating the feedback.
The Principle of Useless Measurement – A metric is only part of a control system; measuring metrics per se does not generate any change.
The First Agility Principle – Smaller projects can change direction quicker with a smaller “force”. Megaprojects, once going are very hard to change direction.
The Batch Size Principle of Feedback – Smaller batches complete quicker so get feedback quicker.
The Signal to Noise Principle – As batch size shrinks, the variability of each batch (noise) increases. Efforts should be taken to reduce external sources of noise.
The Second Decision Rule Principle – Provide the economic model so that people closest to the problem can make the right decisions.
The Locality Principle of Feedback – Local feedback loops are shorter and can reduce volatility.
The Relief Valve Principle – Identify a metric to act as a relief valve; if its limit is reached, release enough pressure that the workload returns to the centre of the range rather than still hovering around the limit.
The Principle of Multiple Control Loops – Using a mixture of short loops and longer loops can counteract shorter and longer term issues in a timely fashion.
The Principle of Controlled Excursions – Within a control range performance is predictable; it is key to stay within this range, else performance starts to run away.
The Feedforward Principle – When a higher arrival rate is expected feeding this information in advance can help prepare by reducing the existing queue.
The Principle of Colocation – Face to face communication increases the speed of feedback and distribution of knowledge.
The Empowerment Principle of Feedback – Fast feedback gives people a sense of control; even if they had control before, if they don't see the impact quickly they don't feel in control.
The Hurry-Up-and-Wait Principle – It is hard to create urgency if the work we had to rush then just sits in another queue. Short queues mean that work is done quicker, and this produces a general sense of urgency.
The Amplification Principle – Given the choice between working on one severely late project or one which is only a little late, people will choose the slightly late one, as they would prefer to deliver one thing than nothing. This amplifies the issue on the severely late project.
The Principle of Overlapping Measurement – Overlap personal, departmental and organisational measures to align people to the goal.
The Attention Principle – If it is important we need to give it attention, so people see that it is important.
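As a rough illustration of the Relief Valve Principle above, the sketch below models a queue with an invented limit and target: when the limit is breached, work is deferred until the queue is back at the centre of the control range, not merely just under the limit.

```python
# Relief-valve sketch on a simple list-based queue.
# `limit` and `target` are hypothetical numbers for illustration.

def relieve(queue, limit, target):
    """If the queue exceeds `limit`, defer the newest items until
    the queue length is back at `target` (the centre of the range)."""
    deferred = []
    if len(queue) > limit:
        while len(queue) > target:
            deferred.append(queue.pop())  # defer the newest arrivals
    return queue, deferred

queue = list(range(12))           # 12 items; limit 10, target 5
queue, deferred = relieve(queue, limit=10, target=5)
print(len(queue), len(deferred))  # → 5 7
```

Releasing only down to the limit would leave the system hovering at the edge of the control range, ready to trip again; releasing to the target restores slack.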
Decentralization Principles – Decisions should be made at the level where the most information is available soonest.
The Second Perishability Principle – Certain problems and opportunities are perishable. People do not need to request permission to put out fires; similarly, for perishable problems and opportunities, people should have sufficient freedom to act.
The Scale Principle – Centralise the problems which are not perishable: those that are infrequent, large, or have significant economies of scale.
The Principle of Layered Control – Having a good escalation process means that issues can get the focus before they balloon into much bigger issues.
The Opportunistic Principle – An original plan should be used for alignment, not conformity. Once a project starts, the opportunities and challenges discovered should quickly be used to adapt or even cancel the project.
The Principle of Virtual Centralization – Having a team which can come together to tackle large challenges, but day to day when such challenges do not exist they do other non-central work.
The Efficiency Principle – Efficiency should not always trump response time – the economics should be taken into account as a quick response can be more valuable.
The Principle of Alignment – There is more value created by overall alignment to the goal than by local optimisation – e.g. improving 10 features by 1% will make little impact if the aligned goal could be better achieved by improving 1 feature by 10%.
The Principle of Mission – The end state or goal should be clear to everyone, with as much flexibility as possible on how it is achieved.
The Principle of Boundaries – Having clear roles and responsibilities means decisions can be made quickly. Also ensure there are no gaps in between, using the principle "If you are going to worry about it, I won't".
The Main Effort Principle – Identification of the main effort to focus time and energy to maximise economic returns.
The Principle of Dynamic Alignment – As we progress and learn, our economic model must keep pace. The evolution of this model may result in changing priorities.
The Second Agility Principle – When a change of priority happens we must adapt quickly to take advantage. This should be factored into the process.
The Principle of Peer-Level Coordination – Peer to peer communication can move things much quicker than a project management office.
The Principle of Flexible Plans – Planning is useful for thinking things through, even when the plan itself changes. Building modular plans allows modules to be changed as things evolve.
The Principle of Tactical Reserves – Each layer should have reserves which it can apply to problems escalated to it, with the aim of resolving the issue.
The Principle of Early Contact – Getting close to the user and the problem early is preferable so that learning can be maximised.
The Principle of Decentralized Information – For effective decisions to be made at the lowest level everyone must have access to the information.
The Frequency Response Principle – There is a maximum rate at which we can respond; to improve agility this rate needs to increase, which can be done by involving fewer people.
The Quality of Service Principle – If response time is key then measure it. Meeting the target may mean the team is over-resourced, but this makes economic sense if the delays would cost more than keeping a low response time.
The Second Market Principle – Allow market forces to aid in prioritisation. These could be money or limited tokens per project.
The Principle of Regenerative Initiative – An imperfect decision made quickly can have benefits over a better decision made slowly, which delays work.
The Principle of Face-to-Face Communication – Verbal, ideally face-to-face, communication produces much quicker feedback than email.
The Trust Principle – Small batches build trust and trust enables decentralised control which enables small batches.
There tends to be a perception that performing well at one level means a person will likely perform well at the next. In reality each level in an organisation has a different set of challenges, which require different values, time applications and skills. The book presents 6 passages.
Work value – what people believe is important and thus the focus of their effort
Time applications – new time frames that govern the work
Skill requirements – the capabilities required to execute new responsibilities
In general a change in work values will then cascade to time applications and skill requirements.
High-quality technical or professional work
Accepting the company's values
Completing work assigned within the given time frame at the required quality – usually short term
Individual contribution through technical or professional proficiency
Relationships built for personal benefit
Using company tools, processes and procedures
Managing Self to Managing Others
Must shift from “doing” work to getting work done through others
Success of direct reports
Valuing management work
Success of unit
Balancing own tasks but also helping others work effectively
Coaching and feedback
Measuring others' work
At GE they plotted values against results – if a promotion did not result in a change in values, then the person did not remain in a leadership position.
Managing Others to Managing Managers
Link the strategy to the workers, and the workers' capabilities to the strategy
Divested of individual tasks; a pure management focus.
Selecting people for the passage to Managing Others, assigning managerial and leadership work, measuring their progress and coaching them.
Holding first line managers accountable
Deploying resources effectively
Manage the boundaries between teams
Thinking beyond the team.
Returning Managers of Others to individual contributor roles
Signs of misplaced managers
Poor performance management
Failure to build a strong team
Single-Minded focus on doing work
Choosing clones over contributors
Managing Managers to Function Managers
Learn to value “foreign” work.
A long-term perspective and strategy.
“Can we do it?”
Value what you don’t know
Participating in business meetings and working with other functional managers.
Valuing time spent on the state of the art and the longer term.
Collaboration with other functions
Acting more as a leader than a manager
Communication skills for working with people further away. Must manage areas outside of their own expertise.
Understanding how front-line people are doing without alienating the managers in between
Must take other functions' concerns into consideration, collaborating with them.
Managing competition for resources according to business needs.
Becoming proficient strategists.
Function Managers to Business Manager
Values the success of their own business
Valuing all functions e.g. staff functions
“Should we do it?”
Time needs to be reserved for reflection and analysis.
Clear link between their efforts and market results.
In charge of integrating functions not just being aware of them.
Whereas before the question was how to do something, now the question is should we do it and what value it will bring to the business.
Balancing short-term goals and future needs – meeting quarterly profit, market share, product and people targets while also planning 3–5 years ahead.
Signs of a misplaced manager
Inability to assemble a strong team
Failing to grasp how the business makes money
Problems with time management
Neglect of the soft issues
Business Manager to Group Manager
Values the success of other managers' businesses, while no longer getting the credit.
Which choices will give us the best results now and in the future?
Coaching business managers – not managing their businesses for them
Evaluating strategy for capital allocation and deployment
Portfolio strategy – the right businesses
Core capabilities assessment
Group Manager to Enterprise Manager
Setting a small number of mission-critical priorities per year and keeping focus on them
Growing people for the right roles using the right assignments to give them the required experience
Dealing with highly ambitious direct reports knowing they want your job
In most cases, failure does not occur because people are lazy or incompetent; it occurs because they really don’t understand their role when they move to new leadership levels.
When moving from one leadership level to the next, if there is not a change of approach then it will lead to failure.
There are subtle but meaningful differences in the requirements for success at different leadership levels.
The book starts off with some stories highlighting the importance of diverse knowledge and experience. It highlights that not only do homogeneous groups underperform, but they underperform in predictable ways – homogeneous groups share the same blind spots and reinforce them, so such groups become ever more sure of their judgements.
One of the challenges with diverse groups is that discussions in them are cognitively demanding – with plenty of debate and disagreement as different perspectives are aired. Such groups (in the experiment) typically came to the right result but were not certain of it, because the discussion highlighted the inherent complexity. A homogeneous group, by contrast, was more likely to be wrong because of mirroring behaviour and unchallenged blind spots, yet as a result was also more sure it was correct.
When we have complex (as opposed to simple) problems, e.g. economic forecasts, there is no single model which takes everything into account, so combining multiple models from different people produces a better result. Each forecaster has their own frame of reference and builds a model to reflect it, including its blind spots; different economists (as an example) have different world views, so combining them makes the result more inclusive. In this world, having clones of the best economist would not produce the benefit which diversity brings.
In uncertain times people are naturally attracted to dominant leaders. The irony is that exactly these times are when diversity of opinion is at its most important.
Immigrants are more likely to be entrepreneurs – because they see beyond the status quo and can envisage new products and ways of working which incumbents cannot.
Echo chambers – the interesting part is that opinions tend to become stronger when exposed to opposing ideas. When people feel under attack they find holes in the other person's arguments, which confirms their position. As such they hear but dismiss outside voices they don't trust; they trust their own opinion and those who agree with them.
When people are in small diverse communities there is less opportunity for echo chambers to form; the challenge is that in large environments (e.g. the internet) it is very easy to find people who agree with your perspective and form a reinforcing group.
The one simple thing which everyone should take away from this book is: you have a business plan, and in it there are assumptions – run experiments to test each of those assumptions. You can then see whether you should pivot the strategy for your vision if people don't react as you were expecting; but first you must launch early and learn. Then you can optimise your product, using experiments to refine it.
A startup should be properly understood as “a human institution designed to create a new product or service under extreme uncertainty”
To see if things are going the right way, don't just use number of users vs time – these up-and-to-the-right graphs are misleading vanity metrics. Instead use cohort analysis: of the 100% of people who visited your site, look at the percentage that signed up, the percentage that signed up and used the product, and so on. If you are making changes and these percentages stay the same, then things are not improving. Doing this needs small batches and speedy feedback loops.
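The cohort analysis described here can be sketched as follows; the cohorts, funnel stages and counts are invented for illustration:

```python
# Cohort analysis sketch: express each funnel stage as a percentage
# of that cohort's visitors. All data below is hypothetical.

def funnel_percentages(cohort):
    """Express each funnel stage as a percentage of the visitors
    in that cohort."""
    visitors = cohort["visited"]
    return {stage: round(100 * count / visitors, 1)
            for stage, count in cohort.items()}

march = {"visited": 1000, "signed_up": 150, "used_product": 60}
april = {"visited": 2000, "signed_up": 300, "used_product": 120}

print(funnel_percentages(march))
# → {'visited': 100.0, 'signed_up': 15.0, 'used_product': 6.0}
print(funnel_percentages(april))
# → identical percentages despite double the visitors
```

Both cohorts convert at the same rates, so despite the growth in absolute numbers (the "up and to the right" graph) nothing has actually improved.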
Experiments must be: actionable, demonstrating clear cause and effect; accessible, i.e. understandable; and auditable, so that the accuracy of the numbers can actually be validated and trusted.
There are two engines for a business: growth, acquiring customers, and revenue, earning money from customers.
If things are not improving and staying the current course will not change that, then a pivot is needed. There are various types of pivots:
Zoom in – focusing on a sub part of the product
Zoom out – where the current product becomes a feature of a bigger product
Customer segment – where your customers turn out to be not the ones you expected
Customer need – by getting to know our customer we find a greater need for them
Platform – turning from a product to a platform
Business architecture – e.g. from B2B to B2C
Value capture – a change in monetisation or revenue models
Engine of growth – viral (word of mouth), sticky (returning) or paid (adverts)
Channel – how the content is delivered e.g. DVD or internet
Technology – using a different technical solution to solve the same problem