How to get started advising startups

A few people have asked me about my work with Startmate, Australia’s leading startup accelerator[1], and how they can get involved with investing in or mentoring startups.

Today I wanted to answer this and share a bit about how I engage with startups as a mentor/advisor in Startmate.

Startmate All Hands, March 2019

How can you get involved?

A startup incubator is a great place to start because it provides a way to connect with the startups, filters them for quality, and allows you to invest as well.

I started with Startmate by applying to be a mentor, then investing in and mentoring the Melbourne cohort in 2018. Incubators/accelerators like Startmate work by having mentors put money into a fund (minimum $10k per round), which is then invested in the startups.

Startmate also hosts “office hours” for mentors, which are 30-minute Zoom calls with founders arranged in a block. Participating as a mentor there can help you build your network of founders and meet people outside the incubator.

Alternatively, you could advertise your availability as a mentor/advisor on LinkedIn. A few people have reached out to me on LinkedIn since I added startup advisor to my bio there.

Another approach is to find teams yourself and offer to help them out. One team I now advise I found initially via their YouTube videos. After a few chats and a visit, I offered to lead an angel investment for them.

When you reach out to an incubator or advertise yourself on LinkedIn, the first question you’ll be faced with is: what can startups learn from you?

Do you have relevant experience?

Startup founders want to learn from people who have been in the same situation before, or who can help them make progress quickly. The typical categories of startup advisors are:

  • Other founders - help with starting the business, hiring early employees, what to focus on/ignore.
  • Investors - help with identifying what the company needs to get venture funding (hint: a product with customers and revenue!), or by actually giving them money.
  • Domain experts - this could be people in a particular industry (e.g. doctors for a medical startup), or people with particular skills (e.g. CTOs for a software startup).

If you’re not a founder, an investor, or an expert in something immediately relevant to a startup, you should probably find something else to do with your time.

In my case, I’ve found my experience as an early employee at Atlassian is occasionally useful, but not as useful as the experience of mentors who have actually been founders. So the main thing I bring is my skill with product and software development – helping teams identify their customer needs and working with devs to turn those into working, viable products.

The gotcha with having specific expertise like this is that you’ll need to spend a fair bit of time to get to know the startup and their industry, to make sure your advice is relevant. So the next question is whether you want to make that commitment or not.

Can you spare the time and focus?

Startup incubators often emphasise how little time it takes to be involved as a mentor, and they do a lot of work to make things easy for us. But I’ve found that I have to spend more than the minimum time to get the most out of mentoring, both for the startups and for myself.

My typical approach to mentoring in Startmate starts with the applications:

  • Reviewing applications (5-6h). I typically spend 5-6 hours one evening reviewing Startmate submissions and voting on them. This includes looking at websites, watching pitch videos and reading details about dozens of teams to try to find the best ones. Startmate was getting over 200 applications when I was a mentor, but I could only review perhaps 20-30 in one night.
  • Interview Day. The most important event to attend for Startmate is Interview Day, where the top 30 applicant teams attend (or, more recently, meet over Zoom) to give their pitch to you. As a mentor, I find it helps to spend a bit of time in addition to the day itself.
    • Interview day prep (3-4h). The night before interview day, I look through my interview schedule and look up each team’s website or application. Then I jot down questions I can ask them so that, rather than a rehearsed pitch, we can jump straight into the meat. How many customers do they have? What is their go-to-market strategy? Which competitor are they most worried about? This takes 3-4 hours.
    • Attend interview day (3h). Interviewing 15 teams for 10 minutes each takes 3 hours, so this is gruelling but incredibly fun. During each interview, I take notes on good and bad aspects, looking for teams that are high functioning and going after a good opportunity. The key question: would I invest my own money in this company?
    • Send follow-up emails (1-2h). After interview day, I have usually found a couple of startups that I personally connected with. I jot some of my thoughts in an email and send it over to the founder, so they have some impartial feedback on their company, even if they don’t make it into the actual incubator. It takes about 1-2 hours to write and send 3-4 detailed emails.

By the time the teams are selected for inclusion in the cohort, I’ve usually worked out which teams I want to spend more time with. In the most recent cohort, Startmate introduced the idea of a “squad”, a designated group of mentors for each company. So in this round, I focused on my two squad teams, each with a weekly meeting, plus three others I met with occasionally through the program.

Mentor activities throughout the program include:

  • Reading weekly updates. Each team sends a weekly update on their progress, including metrics and good/bad/ugly happenings for their company. At the start of the program I read all of them, to get familiar with each team, and later on I focus on the ones I’m following most closely.
  • Squad meetings. The squads I participated in for Startmate MEL/NZ 2020 had weekly meetings for most of the program, and this was quite helpful for keeping up to date in Covid times, when we could only meet via Zoom.
  • Sending suggestions via email. My go-to format for sending feedback to founders is via email. It gives you a chance to spell out the situation you saw, your suggested tactics, and lets them consider your suggestions and reply and ask questions if needed.
  • Trying out products and sending feedback. As founders develop their products, they often drop links or examples in their email updates or a Slack channel. Going click-by-click through a team’s product and writing down your thoughts as you go is often invaluable for early stage products.
  • Chats with teams. As I send suggestions and feedback and build relationships with the teams, I’ve found they start asking me for advice proactively. This is the real value of the program: your experience as a mentor can really help teams move fast and avoid pitfalls that are obvious to you, but not to them.

In the past, when we had in-person cohorts, I also used to occasionally pop in to attend the weekly All Hands meetings, to learn about each team’s progress and meet with them before or after. I found this incidental face time with the teams really valuable.

As part of the Startmate program, there is also the option to share your expertise via a presentation to the cohort, but given my background I’ve found 1:1 conversations to be the most effective way to engage with the teams.

Altogether, mentoring as a product/dev advisor takes at least 2-3 hours per week to do all the above.

What are the benefits of mentoring?

If startup mentoring now seems like a lot of work, it is. But it’s rewarding too.

First, personally, it’s fun to make friends with people starting companies across a wide variety of industries. You can meet new people, learn about their challenges, and try to help them succeed. When they do succeed (and many of them do), you feel like you contributed to that success in some small way as well.

Second, you’re scaling your experience across many teams and helping them advance society in more ways than you can do as an individual. By helping build a community of entrepreneurs, who themselves return and help future entrepreneurs, you’re helping create thousands of world-class innovations now and in the future – to improve the world for everyone!

Lastly, but probably least important to me, there will eventually be a financial benefit from investing your money and effort in the program. Although no Startmate companies have made it to IPO yet, there have been some successful secondary sales and many are on the path to get there in the long term.

Good luck on your mentoring journey!

I hope this information is helpful in getting you started, and I look forward to seeing you soon at a Startmate or other startup event in our community.

If there are any mistakes above or additional things that should be included, or you have suggestions about other startup topics to write about, please shoot me an email.

  1. And, as of 2020, now also in NZ! 

Fixing videos without sound on 2nd generation Apple TV

I was trying to watch some TV episodes recently that wouldn’t play sound through my second generation Apple TV due to their audio encoding. Fixing them turned out not to be too hard, but working it out took a while, so here it is documented for posterity.

The MP4 video files had AC3 5.1 surround sound audio, shown as “AC3 (6 ch)” in Subler. However, the 2nd gen Apple TV only supports playing stereo audio over HDMI, and 5.1 audio only works via “pass through” on the optical output to an AV receiver (when enabled via Dolby Audio in the Apple TV settings). I don’t have an AV receiver or anything else hooked up to the optical port on my Apple TV. So playing these files on the Apple TV results in no sound being sent via HDMI, and no sound for me while watching these videos.

The fix is to reencode the audio as AAC stereo, while passing through the video and subtitle streams without modification. Install ffmpeg via Homebrew, then run the following command:

ffmpeg -y -i file.mp4 -map 0 -c:v copy -c:a aac -ac 2 -c:s copy file-fixed.mp4

The arguments are as follows:

  • -y – overwrites any existing output file
  • -i file.mp4 – input file
  • -map 0 – sends all streams from the first (and only) input file to the output file
  • -c:v copy – uses the “copy codec” for the video stream, which means pass it through unchanged
  • -c:a aac -ac 2 – uses the AAC codec for the audio stream, with just 2 audio channels
  • -c:s copy – copies the subtitle tracks (if any)
  • file-fixed.mp4 – the output filename.

Looping this over all my files fixed the soundtrack, which appeared afterwards as “AAC (2 ch)” in Subler. It also shaved about 100 MB off the file size of each. I was happily watching the TV episodes (with glorious stereo sound) on my old Apple TV soon after.
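For reference, the loop over a folder of files can be done with a small shell script like the sketch below. It assumes the files sit in the current directory and uses a hypothetical `-fixed` suffix for the output names, matching the command above:

```shell
# Re-encode the audio of every .mp4 in the current directory to AAC stereo,
# writing the result alongside the original with a "-fixed" suffix.
for f in *.mp4; do
  [ -e "$f" ] || continue                      # skip if the glob matched nothing
  case "$f" in *-fixed.mp4) continue ;; esac   # don't re-process output files
  ffmpeg -y -i "$f" -map 0 -c:v copy -c:a aac -ac 2 -c:s copy "${f%.mp4}-fixed.mp4"
done
```

The `${f%.mp4}` parameter expansion strips the `.mp4` extension so the suffix can be appended cleanly.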

Credit to this Stack Overflow post for leading me down the right track.

Six small improvements in iOS 6

Continuous improvement is a big part of why I continue to buy and advocate Apple products. So after upgrading to iOS 6 on my iPhone 4S, I was curious to see what small things had been tweaked and changed across the OS.

More aggressive auto-dim

One of the first things I noticed after the upgrade was that the lock screen was noticeably dimmer when I first pressed a button to wake the phone up. I’m usually using my phone outdoors, so I typically have my phone configured with maximum brightness. After the upgrade, however, the lock screen definitely wasn’t at maximum brightness.

It appears that iOS 6 is more aggressive with the iPhone auto-dim setting, particularly when waking from sleep. For me, this is a small but noticeable improvement, because the screen is no longer so extremely bright when I wake my phone at nighttime to check the time.

This should also make for a slight improvement in battery life. If your phone is in your bag or pocket and jostling makes it wake up from time to time, the dimmer lock screen should result in less wasted battery.

Improved battery life

My iPhone also seems to be getting much better battery life now that it is running iOS 6. I used to finish a day of work with intermittent use of my phone at around 20-30% battery remaining. After upgrading to iOS 6, I’m seeing it more often at 50-60% at the end of the day.

This is great news for people like me, who occasionally forget to charge their phone overnight, and are left struggling through a second day trying to minimise phone use so the device doesn’t die.

New emoji

Emoji initially seemed like just a gimmick to me when they were introduced in iOS 5, but these little characters have started popping up everywhere. In text messages to my friends and family. In emails. Even in nicknames on the intranet at my work.

With iOS 6, Apple has added even more emoji to the system, including many new faces, foods, and other objects. Adding a bit of colour to your text messages just became even more fun.

Songs for alarms

I was starting to get tired of waking up to the same duck quacking, guitar strumming and blues piano chords. So it was really past time for Apple to support using songs from your music library as an alarm tone – a feature that other phones have had for years.

My advice? Just make sure you and your partner choose different songs for each day so you don’t feel like Bill Murray in Groundhog Day when “I Got You Babe” starts playing at 6:00 every morning.

Spelling correction for keyboard shortcuts

One of the nicest and least publicised features of iOS 5 was the addition of text expansions, known as “keyboard shortcuts” in Apple parlance. Found in Settings > General > Keyboard, you can configure as many shortcuts as you want. I have just one, a tweak to the one which is shipped by default: “omw” converts to “On my way” (without an exclamation mark).

In iOS 6, these shortcuts are now registered with the automatic spelling correction. So if I type “onw” by mistake, in my rush to wherever I’m going, iOS now corrects it and expands it to “On my way” correctly. Such a small change, but one which makes a big difference to me.

Panorama photos

While the last item on my list isn’t a small feature, it’s something I’ve found incredibly useful in the past week: panorama photos. There are plenty of sites that go into detail about how they work and provide stunning examples, but I’m just pleased to be able to capture a great photo of a vista while bushwalking or my surroundings in the city.

Expect to see panorama photos popping up everywhere now that Apple has put this tool into the hands of every amateur iPhone photographer.

Summary

Overall, I’m really happy with the upgrade to iOS 6. The controversial Maps update has not proved a problem to me in my usage so far, and all the little things above make using my phone a better experience.

Also: Three things about iOS 6

Why I use Firefox

After trying Chrome for a couple of weeks on my laptop, I’m back to Firefox again as my main browser for day-to-day and development use. The drawbacks of Chrome for my everyday browsing far outweighed the benefits in speed and development tools.

There are really just two reasons why I keep coming back to Firefox. I thought it might be useful to note them, and perhaps some browser developers might read this and get some feature ideas for their next version.

Awesome bar

The single best feature of Firefox and the number one reason why I continue to struggle with any other browser is Firefox’s aptly-named Awesome Bar. Unlike the smart address bars found in Chrome and Safari, Firefox’s has an uncanny ability to immediately suggest the page I’m looking for, after just typing a word or two.

I don’t know all the details of how the Awesome Bar works internally, but some of the features that I find useful are immediately obvious when I try to use the smart address bar in Chrome.

Firstly, Firefox prioritises page history over web searches. Like everyone else, I do use the address bar to launch web searches, but when I do so I just hit enter – I don’t wait for the address bar menu to display a list of suggestions. Chrome seems to prioritise web search suggestions over pages that are in your history, which makes most of the options in the dropdown pretty useless to me in practice.

Below are two screenshots of what happens when I type ‘bootstrap’ in the Chrome and Firefox address bars, after having browsed the Twitter Bootstrap site several times in both browsers. Chrome shows three useless search suggestions, and Firefox lists out several pages that I’ve been to in the past and am fairly likely to be looking for.

Screenshot of Chrome's address bar search with irrelevant search suggestions
Chrome's address bar not-so-awesome search
Screenshot of Firefox's address bar with useful suggestions from my browsing history
Firefox's awesome bar with useful results from my history

Note also, in the screenshot above, how Firefox gives me the option to switch to the “Examples” page which is open in an existing tab. When you have thirty or forty tabs open (as I frequently do – see below), being able to switch to an open tab instead of duplicating it is a great feature.

Secondly, Firefox’s Awesome Bar uses a combination of both the page title and the URL when matching against your browsing history, and it does substring matches on both. This means I can type something like ‘jira CONF macro’ and see a list of Confluence issues on our issue tracker containing the word ‘macro’ that I’ve been to recently. Chrome’s address bar seems to only search URLs, which is far less useful.

Another screenshot of Firefox's address bar with useful suggestions from my browsing history
Firefox awesome bar with JIRA issues I might be looking for

The most infuriating aspect of Chrome’s search suggestions is that they change after you’ve stopped typing. A particularly irritating example is this situation, which has happened to me several times:

  • you’re busy typing out some words which should match a page in your history
  • you see the page you want in the suggestions list, so you stop typing
  • as you go to use the cursor keys to select it, the suggestion disappears and gets replaced with useless suggestions from Google
  • you curse Google for making their suggestions feature so frustrating.

I consider the Awesome Bar among the most important productivity tools on my computer. When I was using Chrome, locating a page that would normally take me two or three seconds in Firefox would take minutes, often requiring navigating back through websites to the page I was reading earlier.

In Chrome, the address bar search seems hamstrung by Google’s desire to promote its search engine to the detriment of more relevant suggestions within the browsing history of the user.

Vertical tab tree

A large proportion of my work day consists of working with web applications and reading information on web sites. As such, I tend to accrue a large number of tabs in my browser. Horizontal UI components, like the typical tab bar in a web browser, are not designed to cope with a long list of items.

With Firefox, there’s an amazing extension called Tree-style tabs. This extension displays your tabs vertically on the left side of the window, where you can fit maybe 30-40 tabs, all with readable titles and icons on them. It also automatically nests tabs underneath each other, as you open a link in a new tab from a page you’re looking at. This helps group related tabs together in the list, as you open more and more of them.

Screenshot of my Firefox tab tree
Manageable tabs: Firefox tree-style tabs extension

Even with this extension, however, the situation isn’t all roses. Any browser with a lot of tabs open starts to consume a lot of memory, and every few days I need to restart Firefox to get it back to an acceptable level of performance. All my tabs are restored, but it seems the various slow memory leaks which accrue in the open windows are resolved by the restart.

The extension is also flaky in various ways, particularly when dragging tabs around. I have tried Chrome’s secret vertical tabs feature, but that doesn’t work very well currently. If another solution providing similar functionality were available anywhere else, I’d gladly try it out.

A related small improvement I’ve noticed in recent versions of Firefox is that it no longer attempts to reload all the tabs when you reopen your browser. This is a welcome improvement, particularly on slow connections, where the dozens of tabs you have open don’t use any bandwidth until you switch to them.

Looking forward

In the future, I hope the other browsers will catch up with similar productivity features. As someone who lives in their browser, I can put up with a lot of other drawbacks in exchange for such significant features.

In particular, I would really like to use Chrome for a bunch of different reasons: the process-per-tab model, faster rendering, some more advanced developer tools, and its generally faster adoption of new web technologies.

Also: Ten things every web developer should know.

Three things about iOS 6

iOS 6 icon

Watching last week’s keynote at Apple’s WWDC conference, I was struck by how much Apple continues to make great iterative improvements to their software and hardware products. Particularly on the software side, this makes it great to be a consumer of their products. They ship a functional, and in some ways minimal, first version of a product, and then they continue to incrementally improve on it, year after year, until the result is something far superior to everything else on the market.

With that in mind, three of the improvements that excited me most in iOS 6 were things that most people probably dismiss as small unimportant tweaks, but they’re changes that I can see making a big difference to how I use my iOS devices every day.

iCloud tabs

iCloud tabs is the first improvement which solves a small problem I hit all the time. I’m browsing on my iPad over breakfast in the kitchen, then go into the office to do some work. As soon as I sit down at my computer, I remember that I need to finish reading that page I was reading on my iPad earlier. iCloud tabs provides a button in Safari and Mobile Safari to open up a page that is open on any of your other devices.

Screenshot of iCloud tabs menu in Mobile Safari on an iPad
iCloud tabs: get access to open web pages across your devices

Other browsers have tab synchronisation features, particularly across PCs, but I think this is a particularly elegant way of solving this problem between multiple different devices. You click the cloud button on your browser toolbar on any of your devices, and it shows you a list of the tabs currently open on any of your other devices. Neat.

Actions when declining phone calls

A common problem for all mobile phone owners is dealing with unwanted calls. Your pocket starts vibrating – or worse, ringing – while you’re talking with someone or sitting in a meeting.

The iPhone has always had a simple facility for dealing with this immediately, even when the phone is in your pocket. You can hit the sleep or volume-down button once to silence the call, and twice to decline it. The problem was that it was very easy to forget to return a call after you’ve declined it. For those like me who don’t use voicemail, or who have friends who won’t leave a message, this is particularly problematic.

In iOS 6, Apple adds a choice of actions you can perform when declining a call via the touchscreen. It gives you the following choices:

  • reply with message
  • remind me later.

Screenshot of new decline call options on an iPhone
New options when declining a call on your iPhone

The message option gives you a set of canned messages: “I’ll call you later”, “I’m on my way”, or “What’s up?”, as well as the ability to write a custom message. The reminder option can create a reminder for one hour, or one based on a geofence: current location, home, or work.

This is going to be incredibly useful to me when I need to decline a call at the office. Setting a reminder so I remember to call the person back after an hour, or sending Liz a message to tell her I’m leaving soon will be great.

Facebook calendar integration

The last feature I’m looking forward to is a component of one of the major features in the new OS: Facebook integration. My particular problem is that I occasionally accept Facebook invites from friends to attend their events, but neglect to add the event to my calendar. This leads to situations where I plan to go away or double-book myself over an event I’ve already accepted.

I’m looking forward to having those Facebook events visible in the calendar on my phone, so I won’t accidentally make clashing appointments again in the future.

Summary

From the first time I saw the iPhone, presented by Steve Jobs in January 2007, I noticed so many small useful things that I was certain it was going to be the best phone for me. Aside from all the phone’s features, it was the clever behaviour of the device in a million different circumstances that won me over.

Apple continues on the same track with the updates in iOS 6. As well as a few large features, they’ve continued to improve the software in many ways that are going to help their customers every day. This focus on improvements that perfectly address the needs of their customers is why I continue to recommend the iPhone and iPad to everyone I speak with.

Related: How Apple views the web.

Stalingrad

Last week I finished a book that has been on my reading shelf for a very long time, Stalingrad by Antony Beevor. It’s an account of the epic and tragic siege of the city of Stalingrad (now Volgograd) in World War II.

The book opens with a gripping overview of Operation Barbarossa: Nazi Germany’s invasion of the Soviet Union which launched in June 1941. The Wehrmacht quickly overwhelmed the unprepared Soviet defenses and their tank armies rolled across the steppe in present day Ukraine, Belarus and Russia over the next few months.

Beevor’s skill is in tying the narrative of the campaign’s progress in with the personal writings and opinions of individuals involved in it:

In the first few days of Barbarossa, German generals saw little to change their low opinion of Soviet commanders, especially on the central part of the front. General Heinz Guderian, like most of his colleagues was struck by the readiness of Red Army commanders to waste the lives of their men in prodigious quantities. He also noted in a memorandum that they were severely hampered by the ‘political demands of the state leadership’, and suffered a ‘basic fear of responsibility’. … All this was true, but Guderian and his colleagues underestimated the desire within the Red Army to learn from its mistakes.

Very soon into the book though, the reader is faced with stories from the grim reality of war on the eastern front:

Contrary to all rules of war, surrender did not guarantee the lives of Red Army soldiers. On the third day of the invasion of the Ukraine, August von Kageneck, a reconnaissance troop commander with 9th Panzer Division, saw from the turret of his reconnaissance vehicle, ‘dead men lying in a neat row under the tree alongside a country lane, all in the same position – face down’. They had clearly not been killed in combat. …

Officers with traditional values were even more appalled when they heard of soldiers taking pot-shots at the columns of Soviet prisoners trudging to the rear. These endless columns of defeated men, hungry and above all thirsty in the summer heat, their brown uniforms and fore-and-aft pilotka caps covered in dust, were seen as little better than herds of animals.

Of course, there are equally awful stories of atrocities on both sides. Once the background of Barbarossa and Operation Blue has been covered, the situation at Stalingrad starts to unfold.

Beevor’s detail here is helped by “a wide range of new material, especially from archives in Russia”, as he describes the situation in the city under siege:

‘The fighting assumed monstrous proportions beyond all possibility of measurement,’ wrote one of Chuikov’s officers. ‘The men in the communication trenches stumbled and fell as if on a ship’s deck during a storm.’ …

‘It was a terrible, exhausting battle’, wrote an officer in 14th Panzer Division, ‘on and below the ground, in ruins, cellars, and factory sewers. Tanks climbed mounds of rubble and scrap, and crept screeching through chaotically destroyed workshops and fired at point-blank range in narrow yards. Many of the tanks shook or exploded from the force of an exploding enemy mine.’ Shells striking solid iron installations in the factory workshops produced showers of sparks visible through the dust and smoke.

Despite ultimately controlling only a narrow strip of land next to the Volga river, Chuikov’s 62nd Army managed to resist granting Stalingrad to the Germans. While the Germans’ attention was focused on claiming the prize of Stalingrad – the city bearing Stalin’s name – Soviet commanders Zhukov and Vasilevsky coordinated a massive counterattack and encirclement of the entire German Sixth Army, called Operation Uranus.

Again here, Beevor’s level of detail around happenings in Moscow and in the lead-up to Operation Uranus is impressive. He also has chilling anecdotes about the willful ignorance of the Nazi leadership:

During the summer, when Germany was producing approximately 500 tanks a month, General Halder had told Hitler that the Soviet Union was producing 1,200 a month. The Führer had slammed the table and said that it was simply not possible. Yet even this figure was far too low. In 1942, Soviet tank production was rising from 11,000 during the first six months to 13,600 during the second half of the year, an average of over 2,200 a month.

The German leadership’s ignorance of conditions on the ground proves to be its downfall, with strategic mishaps and miscommunications allowing the Soviet Union to strike back.

The Soviet armies easily overpowered the weak units on the flanks of the Sixth Army, and completely surrounded them around Stalingrad. The region containing the trapped army was called the Kessel, German for cauldron, and consisted of a staggering number of troops:

The Russians, despite all the air activity over the Kessel, still did not realise how large a force they had surrounded. Colonel Vinogradov, the chief of Red Army intelligence at Don Front headquarters, estimated that Operation Uranus had trapped around 86,000 men. The probable figure … was nearly three and a half times greater: close to 290,000 men.

Operation Uranus started in late November 1942, and by the time the Sixth Army was surrounded, it was in the depths of the Russian winter. The army was ravaged by the freezing weather, plunging to minus thirty degrees Celsius, as infrequent airlifts failed to deliver enough supplies to keep the men fed:

The bread ration was now down to under 200 grams per day, and often little more than 100 grams. The horseflesh added to ‘Wassersuppe’ came from local supplies. The carcasses were kept fresh by the cold, but the temperature was so low that meat could not be sliced from them with knives. Only a pioneer saw was strong enough.

The combination of cold and starvation meant that soldiers, when not on sentry, just lay in their dugouts, conserving energy. … In many cases, however, the lack of food led not to apathy but to crazed illusions, like those of ancient mystics who heard voices through malnutrition.

It is impossible to assess the numbers of suicides or deaths resulting from battle stress. Examples in other armies … rise dramatically when soldiers are cut off, and no army was more beleaguered than the Sixth Army at Stalingrad. Men raved wildly in their bunks, some lay there howling. Many, during a manic burst of activity, had to be overpowered or knocked senseless by their comrades.

The story is so powerful because it is real. From the Soviet prisoners of war left to starve in labour camps, to the anonymous wounded soldiers who are held back from departing planes by the sub-machine guns of the Feldgendarmerie, each page of this book reveals more of this awful chapter of human history. It’s a story that must not be forgotten if it is to remain unrepeated.

I strongly recommend reading Stalingrad. It’s an epic and tragic story, but one that makes you appreciate the peace and safety that we enjoy today.

If you’d like to be updated when I next publish something here, you can follow me on Twitter.

The ideal iteration length, part 2

In my previous post on the ideal iteration length, I looked at how iteration length affected our development of Confluence at Atlassian. I also gave my definition of an iteration:

An iteration is the amount of time required to implement some improvements to the product and make them ready for a customer to use.

When I started at Confluence in 2006, getting improvements ready for customers only happened irregularly, and we were unlikely to have anything release-worthy until close to the end of each multi-month release cycle. Through 2008–2010, we worked on a system of regular two-week iterations with deployments to our internal wiki, called Extranet. Selected builds were released externally for testing as well. This worked well, but we were still looking to improve the process.

Moving faster

In early 2011, we started looking at how we could get internal deployments available more quickly from our development code stream. There were two main sticking points:

  1. Upgrading the internal server meant taking it offline for up to 10 minutes. Upgrades were usually done during the day, so the dev team would be around to help out with any problems, but the downtime was inconvenient for everyone else.
  2. The release process still involved a bunch of manual steps which meant that building a release took one or two days of a developer’s time.

The first problem was solved with some ingenuity from our developers. We managed to find a hack where we could disable certain features of the application and take a short-term performance impact in order to do seamless deployments between near-identical versions of the software. We had to intentionally exclude upgrades which included any significant data or schema changes, but that still allowed the majority of our internal micro-upgrades to be done without any downtime.
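To make the gating rule concrete, here is a minimal, purely illustrative sketch (names are invented, not our actual code): an upgrade qualifies for the zero-downtime path only when it carries no significant data or schema changes, and everything else falls back to a scheduled offline upgrade.

```python
# Illustrative sketch of the deployment gating rule described above.
# Change categories and function names are hypothetical.

BLOCKED_CHANGES = {"schema-migration", "data-migration"}

def is_seamless(changes):
    """True when the upgrade contains no data or schema migrations,
    so two near-identical builds can be swapped without downtime."""
    return BLOCKED_CHANGES.isdisjoint(changes)

def deploy(changes, swap_versions, full_upgrade):
    """Route a micro-upgrade down the zero-downtime path when possible;
    otherwise take the instance offline for a full upgrade."""
    if is_seamless(changes):
        swap_versions()   # seamless swap between near-identical builds
    else:
        full_upgrade()    # scheduled downtime for the migration
```

The interesting design decision is the conservative default: anything touching data or schema is excluded up front, which is what let the majority of internal micro-upgrades proceed without downtime.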

The second problem was solved just with more hard work on the automation front. We hired a couple of awesome build engineers, and over the course of a few months, they’d taken most of the hard work out of the release process. In the end, we had a Bamboo build which built a release for us with a single click.

Once these problems were resolved, we moved our team’s Confluence space on to its own server with the seamless deployment infrastructure. We have now been deploying Confluence there with every commit to our main development branch for more than a year.

The ability to have our team’s wiki running on the latest software all the time is incredible. It enables everyone in our team to test out new functionality on the instance, confident that they’re playing around with the latest code. It allows someone to make a quick change, see it deployed immediately in the team’s working area, and judge what kind of improvement it makes.

Bug fixing is transformed by the ability to deploy fixes as quickly as they’re implemented. If a serious problem arises due to a deployment that just went out, it is often simpler and faster to develop a fix and roll that change out to the server. That reduces unnecessary work around rolling back the instance to its previous version, and shortens the feedback loop between deployment of a feature and the team discovering any problems with it. In the long term, we’ve found that this improves the quality of the software and encourages the team to consider deployment issues during development.

Atlassian’s Extranet wiki, used by the entire organisation, has just moved on to our seamless deployment platform. I’ll have to report back later on how that pans out, but we’re optimistic about how it will help us deliver faster improvements to the organisation.

One-week iterations and continuous deployment

Late in 2011, Atlassian launched a new hosted platform called OnDemand. One of the most significant improvements for us internally with the new platform was a great new deployment tool called HAL. HAL supported deploying a new release on to a given instance via a simple web interface, and could just as easily roll out upgrades to thousands of customers at a time.

The OnDemand team at Atlassian now has a weekly release cycle, which is primarily limited by our customers’ ability to tolerate downtime, rather than any technical limitation.

In the Confluence team, we’re aiming to push out new parcels of functionality to these customers on that same timeframe, reducing our iteration length from two weeks to one, and reducing the time to ship new functionality to customers from a few months down to a week.

We have some problems with moving to this faster iteration model:

  • making sure all the builds are green with the latest code sometimes takes a couple of days, meaning the release process needs to wait until we confirm everything is working
  • our deployment artifact is a big bundle of all our products, so if a bug is identified late in any of the products, deployment of all of them might be delayed
  • we’ll be releasing any code changes we make to thousands of customers every week, rather than just internally.

Each problem requires a distinct solution that we’re working through at the moment.

For the first, we’ll be trying to streamline and simplify our build system. In particular, we want to make the builds required to ensure the functionality is working on OnDemand much more reliable and streamlined.

On the second problem, we’re looking to decouple our deployment artifacts so the products can be deployed independently. We would like to go even further than the product level, so we can update individual plugins in the product or enable specific features through configuration updates as frequently as we like.
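As a toy illustration of enabling features through configuration rather than redeployment (this is a hypothetical sketch, not Atlassian’s actual system, and the flag names are invented), a feature flag resolves a per-customer override against a default:

```python
# Hypothetical feature-flag lookup: per-customer configuration
# overrides the defaults, so a feature can be switched on for a
# limited set of customers without shipping a new deployment.

DEFAULT_FLAGS = {"new-editor": False, "quick-search": True}

def is_enabled(flag, customer_overrides=None):
    """Resolve a flag for one customer, falling back to the default."""
    flags = {**DEFAULT_FLAGS, **(customer_overrides or {})}
    return flags.get(flag, False)
```

For example, `is_enabled("new-editor", {"new-editor": True})` would switch the feature on for a single beta customer while everyone else keeps the default.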

The final problem requires us to ensure our automated tests are up to scratch and covering every important area of the application. It’s important that we also continue to extend the coverage as we add new functionality – often a challenge with cutting edge functionality. The platform provides an extremely good backup and restore system, so we also have a good safety net in place in case there are any problems.

What are the benefits of moving to a faster or continuous deployment model? They’re very similar to the benefits we first saw with the move to a two-week iteration cycle, just bringing them now to our customers:

  • customers will see small improvements to the product appear as soon as they are ready
  • bugs can be identified and fixed sooner, and those fixes made available to customers sooner
  • we can deploy specific chunks of functionality to a limited set of customers or beta-testers to see how it works out
  • releases for installed (also called “behind the firewall”) customers will contain mostly features that have already been deployed in small chunks to all the customers in OnDemand, reducing the risk associated with these big-bang releases.

That sums up the work the team is doing right now to make this all possible.

What is the ideal iteration length?

Back to the original question then: what is the ideal iteration length? Let’s consider the various types of customers we have, and what they might say.

We certainly have some customers who want to be on the bleeding edge, trying out the latest features even if it occasionally means some inconsistencies or minor bugs. We prefer to run our internal wiki that way. These customers want changes released as soon as they’re implemented – as short an iteration length as possible.

On the other hand, there are customers, particularly those running their own instance of Confluence, who prefer to upgrade on a schedule of months or years. These customers want stability and consistency, and would prefer to have fewer features if it means more of both. For these customers, even an iteration length of several months might be too fast.

Most of our customers sit somewhere in the middle of these two extremes.

What we’ve concluded after all this work is that the decision on speed of delivery should be in your customers’ hands. Your job as an engineering team is to ensure there is no technical reason why you can’t deliver the software as often as they’d like, even if that is as fast as you can commit some changes to source control.

That way, when your customers change their minds and want to get that fix or feature right now, there’s no reason why you have to tell them no.

Thanks for reading today’s article. If you’d like to know when I write something next, you can follow me (@mryall) on Twitter.

The ideal iteration length, part 1

In the Confluence development team at Atlassian, we’ve played around with the length of iterations and release cycles a fair bit. We’ve always had the goal to keep them short, but just how short has varied over time and with interesting results.

The first thing you need to define when discussing iteration lengths is what constitutes an iteration. I define it as follows:

An iteration is the amount of time required to implement some improvements to the product and make them ready for a customer to use.

There are various areas of flexibility in this definition that will depend on what your team does and who the customer is. For some teams, the “customer” may be other staff inside the organisation, where you’re preparing an internal release for them each iteration. For some teams, the definition of “improvements” might need to be small enough that only a little bit of functionality is implemented each time.

In every case, an iteration has to have a deliverable, and ideally that deliverable should be a working piece of software which is complete and ready to use.

On top of the typically short “iteration cycle”, we have a longer “release cycle” for our products at Atlassian. This is to give features some time to mature through use internally, and helps us try out new ideas over a period of a few months before deciding whether something is ready to ship to our 10,000 or so customers.

Long (multi-month) iterations

When I first started at Atlassian in 2006, the release process for the team was built around a release with new features every 3–4 months. There were no real deliverables from the team along the way to building this, so in practice this was the iteration length. Occasionally, just prior to a new release, we’d prepare a beta release for customers to try out. But that was an irregular occurrence and not something we did as a matter of course.

There were a few problems with this approach:

  • the team didn’t have regular feedback on their progress
  • it was hard for internal stakeholders to see how feature development was progressing
  • features would often take longer than planned, requiring the planned release date to be pushed back.

You could say that the first two points actually led to the third: because the team and the management had little idea of their overall progress, it was easy for planned release dates to slip at the last minute.

Late in 2007, we tried to address these problems by introducing regular iterations with deliverables into our process.

Two-week iterations

Here’s what our team’s development manager wrote to the company when we started building a release of our software every two weeks and deploying it to our intranet wiki, called Extranet:

We are releasing Milestone releases to EAC every two weeks, usually on Wednesdays. This means that EAC always runs the latest hottest stuff, keeping everyone in the loop about what we are currently developing. Releasing regularly also helps the development team focussing on delivering production-ready software all the time - not just at the end of a release cycle. We aim at always providing top quality releases, and we are certainly not abusing EAC as a QA-center.

Along with this was a process for people to report issues, and some new upgrade and rollback procedures that we needed to make this feasible.

Basically, our team moved into a fairly strict two-week cycle for feature development. Every two weeks, we’d ensure all the features under development were in a stable enough state to build a “milestone” build. This milestone would be deployed to our intranet and made available to our customers via an “early access programme” (EAP).

Initially, this took a lot of work. When building features earlier, on a longer iteration cycle, we’d often be tempted to take the entire feature apart during development then put it back together over a period of months. This simply doesn’t work with two-week iterations, where the product needs to be up-and-running on a very important internal instance every two weeks.

The change was mostly one of culture, however. As we encouraged splitting up the features into smaller chunks which were achievable in two weeks, the process of building features this way became entrenched in the team. The conversations changed from “how are we going to get this ready in time?” to “what part of the feature should we build first?”

This two-weekly rhythm gave us the following benefits over a longer iteration period:

  • the team had a simple deadline to work towards – make sure your work is ready to ship on every second Wednesday
  • features were available early during development for the entire organisation to try out
  • issues with new features were identified sooner by the wider testing
  • the release process was standardised and practiced regularly
  • customers and plugin developers got access to our new code for testing purposes sooner
  • releases tended to hit on or very close to their planned dates, with reductions in scope when a given feature wasn’t going to be ready in time.

However, there also seemed to be some drawbacks with our new two-week iteration process:

  • large architectural changes struggled to get scheduled
  • large changes that couldn’t be shipped partially complete (like the backend conversion from wiki markup to XHTML) had to be done on a long-lived branch
  • the focus on short-term deliverables seemed to detract from longer term discussions like “are we actually building the right thing?”

Looking at each of these problems in detail, however, showed that none of them was directly related to the iteration length. They were really problems with our development process that needed to be solved independently. The solutions to them probably deserve their own posts, so I’ll leave those topics for the moment.

As I mentioned above, there were some prerequisites for us to get to this point:

  • We needed a customer who was okay receiving changes frequently. It might take some convincing that releasing more frequently is better for them, but in the long run it really is!
  • We needed a process for communicating those changes: we published “milestone release notes” to the organisation every two weeks with the code changes.
  • We needed to standardise and document a milestone release and deployment process, ideally as similar as possible to the full release process, though it could take a few expedient shortcuts.
  • The software had to actually be ready to go each fortnight. This sometimes required badgering and nagging the dev team to wrap up their tasks a few days beforehand.
  • Lastly, we needed to assign a person responsible for doing the build and getting it live every two weeks. This role rotated among the developers in the team.
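The cadence and rotation from the last two points can be sketched in a few lines. This is a toy illustration only: the anchor date and developer names are invented, and the real process was a team agreement rather than a script.

```python
# Toy sketch of the fortnightly milestone cadence and rotating
# release duty. Dates and names are illustrative.
from datetime import date, timedelta

DEVELOPERS = ["alice", "bob", "carol"]  # hypothetical rotation roster

def next_milestone(today, anchor):
    """First fortnightly milestone on or after `today`, where
    `anchor` is any past milestone Wednesday."""
    offset = (-(today - anchor).days) % 14
    return today + timedelta(days=offset)

def release_manager(milestone, anchor, team=DEVELOPERS):
    """Rotate the build-and-deploy duty through the team each fortnight."""
    index = ((milestone - anchor).days // 14) % len(team)
    return team[index]
```

For instance, with an anchor milestone on Wednesday 9 January 2008, a query on 10 January would give the next milestone as 23 January, with the duty falling to the second person in the roster.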

Our two-week iteration cycle served us extremely well in the Confluence team. We continued this two-weekly rhythm, building and shipping milestones to our internal wiki and releasing selected builds externally for testing, for more than three years.

To be continued…

That’s it for today’s post. Next time, I’ll take a look at how we’ve attempted to decrease our iteration length further and what the results of that effort have been.

If you’d like to know when my next article is published, you can follow me (@mryall) on Twitter.

The road ahead: self-driving cars

One of the things that excites me most about the future of technology is the recent development in self-driving cars. Not only do I find the idea of being driven with no effort to wherever I like attractive, I believe computers will also be able to dramatically reduce the unnecessary loss of life caused by our vehicles every day. But, as with any new technology, there will be challenges as self-driving cars become more commonplace on our roads. The challenges I see with getting this technology to the mass market are both cultural and regulatory.

The first cultural stumbling block will be that appealing aspect of cars as portrayed by car advertisements: the sense of freedom and empowerment felt by the driver. The ability to move fast and unimpeded to wherever you fancy has spawned a worldwide culture of driving: from wedding cars to Winnebagos, we love to drive to wherever we’re going.

Will being driven around by our cars mean we lose this freedom? I don’t think it should. The machines will be able to take care of the boring bits, and humans can continue to do the fun stuff. With driving, the vast majority of it is the boring bits (contrary to what the car ads say). The computer will be able to take care of getting you from your home to your holiday destination, while you plan what you’re going to be doing with your time away. Once you get there, you can drive the car around the scenic coastal road if you like, just handing over to the computer when you want to sit back and enjoy the view.

The second cultural barrier will be the difficulty in believing that a machine could be a better driver than a human. Everyone knows someone who is a bad driver and “shouldn’t be on the road”, but rare is the person who would themselves admit to being a bad driver. You see, it’s everyone else’s bad driving that causes all the accidents. This over-inflation of one’s own abilities will apply equally to computer-based self-driving cars when they appear. Why would you let the computer drive, when you can do it faster, better or more fuel-efficiently?

Computers may well prove to be slow, cautious and fuel-inefficient drivers when they first take the wheel. But over time, computers will improve in all these areas while their human counterparts remain impaired by their sluggish nervous system, their susceptibility to tiredness or alcohol, and how easily they are distracted. As the computer systems in self-driving cars continue to improve, it will become increasingly hard to argue that a human behind the wheel is better than the computer in any significant way.

The other cultural problem I see relates to the inevitable accidents that will come with the initial rollout of self-driving cars. These cars will not be free from problems, and the first few major incidents involving loss of life will certainly be highly publicised. This situation will be difficult for everyone to deal with, and policymakers will need to take a long-term view and support the development of this technology which promises a huge increase in safety for everyone.

Legal issues around self-driving cars will be complicated and will need some effort from policymakers in order to help the technology develop while ensuring the safety of the public.

One thing to note with the development of self-driving cars is that it’s essential to test them on public roads. Certainly there is a lot that can be done on a test track, but to ensure the software can deal with the wide variety of situations that crop up on a real road, the computer will need to drive on our streets while a human monitors it and can take over if necessary.

Setting up a legal framework for on-road testing is the first step where lawmakers can help out. Computers which will be in control of a vehicle on public roads should have gone through a minimum level of testing – a “computer driving test” in other words. The organisations involved with the development of self-driving cars should be able to help set the parameters of such a driving test. Recognition of other vehicles and unexpected obstacles is important, as is obedience to road rules and traffic signals. All this should be tested and assured in a safe test environment before computers are allowed to drive vehicles on the road.

Nevada is the first government in the world to take some steps in this direction, passing legislation last year that allowed the Nevada Department of Motor Vehicles (NDMV) to issue licenses for autonomous vehicles once they pass an appropriate level of testing. The first license was granted last month to a self-driving car developed by Google.

While self-driving cars are in their infancy, there will be many situations that the computers are unable to handle. In these circumstances, a human must be able to take over. Ensuring all manufacturers conform to similar standards with their human override controls is another area where regulation will be important. A self-driving car should provide overrides similar to those of cruise control in current vehicles: a human touching the steering wheel or pressing any of the pedals should immediately take control away from the computer.
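Expressed as logic, the override rule is a tiny arbiter where human input always wins. This is an illustration of the principle only; real vehicle control systems are vastly more involved.

```python
# Illustrative control arbiter: any human input on the wheel or
# pedals immediately takes control away from the computer, much
# like cruise control disengaging in current vehicles.

def driving_authority(human_steering, human_pedals, autopilot_engaged):
    """Return who has control of the vehicle right now."""
    if human_steering or human_pedals:
        return "human"
    return "computer" if autopilot_engaged else "human"
```

Note the fail-safe default: when the autopilot is not engaged, control rests with the human regardless of any other input.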

Lastly, and perhaps most importantly, the law will need to help determine responsibility for failures in the technology of self-driving cars. Like auto-pilot on planes, the software and hardware driving cars on our roads will be responsible for ensuring the safety of millions of people every day. The people who build this software and hardware need to have an appropriate level of liability for accidents due to their mistakes or negligence. Determining what is within the bounds of reasonable care for this kind of system will take some discussion, both inside and outside the legal system, as self-driving cars evolve and come into mainstream use.

As the technical limitations are slowly overcome, self-driving cars look like they’ll be a large part of our lives by the end of this decade. Given our history of using such developments to our advantage, I’m optimistic about what such a future will bring. The cultural and legal hurdles to widespread deployment of this technology seem like they can be overcome given sufficient time and effort. The benefits should make all this more than worthwhile.


Comparing Instagram's growth with Facebook and Flickr

Instagram has been acquired by Facebook for $1 billion. Instagram CEO Kevin Systrom writes:

When Mike and I started Instagram nearly two years ago, we set out to change and improve the way the world communicates and shares. We’ve had an amazing time watching Instagram grow into a vibrant community of people from all around the globe. Today, we couldn’t be happier to announce that Instagram has agreed to be acquired by Facebook.

What’s perhaps more interesting is that Instagram is going to remain a brand and product separate to Facebook inside the Facebook organisation.

Mark Zuckerberg writes on his timeline:

We believe these are different experiences that complement each other. But in order to do this well, we need to be mindful about keeping and building on Instagram’s strengths and features rather than just trying to integrate everything into Facebook.

That’s why we’re committed to building and growing Instagram independently. Millions of people around the world love the Instagram app and the brand associated with it, and our goal is to help spread this app and brand to even more people.

We think the fact that Instagram is connected to other services beyond Facebook is an important part of the experience. We plan on keeping features like the ability to post to other social networks, the ability to not share your Instagrams on Facebook if you want, and the ability to have followers and follow people separately from your friends on Facebook.

And an interesting insight into how he views the acquisition as a unique opportunity:

This is an important milestone for Facebook because it’s the first time we’ve ever acquired a product and company with so many users. We don’t plan on doing many more of these, if any at all. But providing the best photo sharing experience is one reason why so many people love Facebook and we knew it would be worth bringing these two companies together.

I was gobsmacked: if Mark Zuckerberg thinks Instagram has a lot of users, there must be a hell of a lot of users on Instagram. I decided to do a little more research.

Just under a month ago, Instagram announced that they had 27 million users with their iOS app. With the release of an Android app in the last week, they’ve seen more than 5 million downloads. In just two years since they launched, that shows phenomenal growth.

If we look at a comparison with Facebook and Flickr, this chart shows how impressive Instagram’s growth is:

Graph showing Instagram's growth as much faster than Facebook and Flickr's in the initial quarters after launch

What I find most surprising about Instagram is that they’ve achieved this growth solely in the mobile space. Their web presence is very limited, perhaps intentionally so. You can get a link to a photo page (example), but there is no way to browse all the photos by a person or navigate their network on the website. To do almost anything with the Instagram “social network”, you need to use the mobile app.

Is this a workable strategy for other mobile-focused apps? I think it probably is. Build a web presence that is just enough to get people to understand the purpose of your app and give them a reason to install it and try it out. This certainly hasn’t held back the growth of Instagram in any significant way.

I hope Facebook will take some cues from Instagram’s brilliant app design and start to simplify and streamline their mobile app. The current Facebook mobile experience is complicated and buggy. They could learn a lot from the Instagram guys.

Sources

Sources for the user data are as follows:

It’s an interesting coincidence that both Flickr and Facebook launched in February 2004. Thanks to Horace Dediu for providing the inspiration for this chart.

Portrait of Matt Ryall

About Matt

I’m a technology nerd, husband and father of two, living in beautiful Sydney, Australia.

My passion is building software products that make the world a better place. For the last 15 years, I’ve led product teams at Atlassian to create collaboration tools.

I’m also a startup advisor and investor, with an interest in advancing the Australian space industry. You can read more about my work on my LinkedIn profile.

To contact me, please send an email or reply on Twitter.