Mentoring programs are a fantastic, highly efficient way of upskilling your staff, and frankly you're crazy if you don't have one.
I'm not talking about some massively heavy, top-down, HR-mandated process but an organic, bottom-up, staff-led one. Making it bottom-up means that you hear from your front-line staff about what they actually want to get out of the program rather than what you think they want.
Having been both a mentor and a mentee, I can say that both get a huge amount out of the process. Whether it's insights, guidance, networks or general development for mentees, or people management, organisational insight, leadership and communication skills for mentors, it's a win-win for everyone.
Now I'm not going to tell you how to do it, as it will vary from organisation to organisation, but as long as you have some basic protections and escalation paths for all parties, it's pretty hard to get wrong. So get out there and start your program.
Last month I spoke at the CIO Edge Conference in Sydney. The topic was innovation in the digital finance space and how we can stay on top of the relentlessly increasing pace of innovation.
One of the things I discussed was using Wardley maps. While they don't deal directly with disruption and new business models, Wardley maps are incredibly useful tools for any business leveraging digital channels. Originally published as an article in CIO magazine, they describe how the value chain of a system can be mapped against the maturity level of its individual components. Not only do they give you a great tool for assessing your digital capabilities against your competitors', they also give you great insight into which components in your system need to be upgraded, whether commoditised or customised, to ensure that your digital innovation efforts are efficient and effective. It's far too common to see companies focus on the surface-layer components and activities without ensuring that the fundamental dependencies are in the correct state to support the customer-visible capabilities. Fans of Eli Goldratt's Theory of Constraints will immediately see their utility.
So much happened in such a short space of time that I'm afraid I've neglected this blog. I'm going to make a concerted effort to publish more content in 2016 and turn at least some of the hatful of drafts and half-written pieces on a variety of topics into proper posts.
The management magazine strategy+business has just published a short video on this topic with Harvard psychology professor Ellen Langer.
Having watched it, I think some of what she says is correct and has real value: that the real benefit of meditation in this regard is the clarity and focus that come after it rather than the act of meditation itself; that everybody benefits from being more in the moment rather than coasting through life on autopilot; and that the act of mindfulness increases attention and focus and leads to better interactions and results in many cases (from a personal point of view, it's one of the things I ask for in meetings at work, even if it's just by having people put away laptops and phones). All good stuff.
But I really hope that some unfortunate editing cut out some deeper explanations of her comments as there are some glaring mistakes that make me wonder how much time Professor Langer spends in the business world.
Her statement that you should always be mindful does not take into account the mental and emotional cost of "always being on". If you've ever had the chance to watch a capable C-level executive at a major function, you will notice how exhausting it is to be in the moment with every person they speak to. It's very much like an actor being on stage, except the spotlight is always on you. Typical strategies I've seen CXOs use are either to keep their involvement very short and controlled, or to have a small support group that they can dip in and out of when they need to re-energise. Mental energy is not an inexhaustible resource, and it's one we need to husband throughout the day. Prof Langer does mention that "you cannot be actively thinking about everything", but I read that as referring to the depth of your mindfulness, not its duration.
Note to self: I regard the work day as neither a sprint nor a marathon but a series of sprints. One of the tricks of mindfulness is recognising when you need to be sprinting and fully engaged, and when you can back off the accelerator, defocus slightly and recoup energy.
Later in the video, her assertion that "there's no more information than there ever was" is either blindly simplistic or just plain wrong. The rise of the machines means we have far greater insight into the workings of our businesses, as things are easier to measure and monitor. Barely a day goes by without someone proposing a new metric, and it has become a critical task of a leader to decide which metrics deserve attention and which should be ignored, killed off or delegated to someone else. The only way her statement could be true is if you redefined information to its most abstract platonic ideal. Yes, the information has always existed in potentia, but now we have direct access to a firehose of real, actual information, far more than we had before.
And how about her sign-off, a weapons-grade "thought leader" bullshit aphorism:
"Rather than spending so much time worrying about making the right decision, I think we should spend more time making the decision right."
I know that we play a game with soundbites: the media love them for their ability to make easy headlines, and we viewers love them for their ability to distil a proposal down to its essence. Unfortunately, what Professor Langer has said is a deepity of epic proportions. It's trivially obvious in that of course we want to make the right decisions (duh!), but it's totally false to imply that we can make correct decisions without sufficiently worrying about both how we make them and the data we are basing them on. Even if we are generous and focus on the phrase "so much time worrying", it is still wrong: the only way we can test that a decision is correct before we make it is by running it through thought experiments to see what its consequences would be. The greater the number and variety of these virtual scenarios, the greater our confidence that the decision is correct, and it follows that the more examined decisions will likely be the better ones.
I'm actually a strong advocate for mindfulness, especially in the workplace, but this sort of vapid statement does not help people understand mindfulness or implement the sort of changes that can have a positive impact.
I read an interesting article in the New York Times: For Big-Data Scientists, 'Janitor Work' Is Key Hurdle to Insights. For anyone who has done any sort of science, this should not be a surprise. Science is the product of a relentless sort of focussed stupidity combined with intelligent insight, and this sort of basic work is essential to the first part of the process.
Anyone who has done a science experiment, even a simple high-school one, will understand that gathering and sorting data is a menial task that cannot simply be automated. As a scientist, you are often playing the role of a highly trained monkey, repeatedly doing the same tasks. Normally this sort of activity could be quickly automated, but there turns out to be so much art in the science that you are often the only trained monkey who can do it.
Whether it's different words meaning the same thing, as in the article's example of a company providing information on drug side effects needing to know that "drowsiness," "sleepiness" and "somnolence" all mean the same thing, or a researcher analysing EEG patterns to determine whether activity is real or just an artefact, both situations are remarkably resistant to complete automation. Even if you could automate everything, it's only when you've ploughed through the dataset that you start to have an idea of what's going on — and that's ignoring the green jelly bean problem of post hoc analysis.
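As a sketch of the kind of normalisation involved, a synonym table is a natural first step. The mapping and function names below are my own illustration, not from the article, and the hard part in practice is building and maintaining that table by hand:

```scala
// Illustrative synonym table: maps variant terms to one canonical
// side-effect name. The entries are assumptions for this sketch.
val canonical: Map[String, String] = Map(
  "drowsiness" -> "somnolence",
  "sleepiness" -> "somnolence",
  "somnolence" -> "somnolence"
)

// Fall back to the raw term when no canonical form is known —
// which is exactly where the manual "janitor work" begins.
def normalise(term: String): String =
  canonical.getOrElse(term.trim.toLowerCase, term)
```

Every term the lookup misses lands back in a human's queue, which is why the trained monkey stays employed.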
For my PhD I manually compared 1.1 million EEG patterns and that was after the computer had processed and tentatively categorised the data. I’d go to sleep seeing wiggly lines across the back of my eyelids but that was the price to pay for correctness that no computational analysis could match.
It's fantastic that there are start-ups out there looking to solve this data-wrangling problem (not everyone has an army of PhD students, postdocs or other slave labour to do it for them), but it's an exceptionally tough problem that I don't expect to see solved any time soon.
Java 8 was released recently, and one of its much-heralded features is default methods. Simply put, these allow you to inject functionality into classes via interfaces. Prior to Java 8, interfaces could only contain method signatures, not implementations; only abstract classes could carry method implementations. That has now changed.
Here's a simple example that prints out "In InterfaceOne":
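The original listing didn't survive, so here is a minimal reconstruction; the interface and class names are my own assumptions:

```java
// A default method lets an interface carry an implementation,
// which implementing classes inherit for free.
interface InterfaceOne {
    default String hello() {
        return "In InterfaceOne";
    }
}

class DefaultMethodDemo implements InterfaceOne {
    public static void main(String[] args) {
        // hello() comes from the interface's default implementation.
        System.out.println(new DefaultMethodDemo().hello());
    }
}
```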
In Scala, another JVM language, you can do something similar but more powerful. With Scala's traits (its version of interfaces) you can override similar implementations and get a form of multiple inheritance. This example prints out "In TraitTwo":
The secret that avoids the diamond-of-death problem is that whichever trait is declared last wins. If I'd swapped the order in which the traits were declared, it would have printed "In TraitOne".
What is even nicer in Scala is that you can declare your class with traits when you instantiate it; you don't have to declare them when the class is compiled. This gives you a powerful way to extend the functionality of classes without the insanity of monkey patching. The example below also prints out "In TraitTwo", but the class itself does not extend any trait. This, of course, is Scala's cake pattern, where you mix in the traits.
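Again a reconstruction with assumed names; the key point is that `Plain` mixes in nothing at definition time, and the traits are attached only at the `new` expression:

```scala
trait Base { def hello: String = "In Base" }
trait TraitOne extends Base { override def hello: String = "In TraitOne" }
trait TraitTwo extends Base { override def hello: String = "In TraitTwo" }

// The class declares no traits at all.
class Plain

object InstantiationMixin extends App {
  // Traits are mixed in at instantiation time; the last one still wins.
  val p = new Plain with TraitOne with TraitTwo
  println(p.hello)  // prints "In TraitTwo"
}
```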
What is slightly disappointing in Java 8 is that you cannot mimic this behaviour. If you try it, you get a compile-time error telling you that the class inherits unrelated defaults.
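To illustrate with assumed names: a class implementing two interfaces with clashing default methods compiles only once you resolve the clash explicitly. Remove the override below and javac rejects the class with an "inherits unrelated defaults for hello()" error — there is no "last one wins" rule as in Scala:

```java
interface InterfaceOne {
    default String hello() { return "In InterfaceOne"; }
}

interface InterfaceTwo {
    default String hello() { return "In InterfaceTwo"; }
}

// Java forces the class to resolve the conflict itself; here we
// delegate to InterfaceTwo's default via the X.super syntax.
class ConflictDemo implements InterfaceOne, InterfaceTwo {
    @Override
    public String hello() {
        return InterfaceTwo.super.hello();
    }

    public static void main(String[] args) {
        System.out.println(new ConflictDemo().hello());  // prints "In InterfaceTwo"
    }
}
```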
I wonder why they chose not to support this. The cake pattern seems like a feature that adds flexibility without letting you shoot yourself in the foot too much.
Just as an experiment, I thought I would try using DuckDuckGo for searching. In an increasingly Big Brother world, I like that they don't track you, and I like that they only serve requests securely (requests to http://duckduckgo.com get a 301 permanent redirect to https://duckduckgo.com). They also offer a proxy service, and the search results seem to be pretty good.
I came across an interesting post here on why Betfair should fear WhatsApp. Strange bedfellows for an article, but the general thrust of the piece is that a chat application is similar to a financial exchange, in this case Betfair, and could theoretically disrupt its business model. I have to be honest and say it's something I'd never thought of.
If you read what he says, you might well think so. The author goes into some detail about the similarities, and given the depth of knowledge in some of the other posts on his site, you might think he's got a point. He even says straight out that "A betting exchange is really just a chat application – in place of messages, you have bets; in place of chat groups, you have markets; and in place of the chat transcript, you have the order book".