
on academic habits

```I liked Tanya's point about the difference in pace in industry vs. academia. A postdoc friend once got a (backhanded?) compliment applying to a startup like, "you're definitely an academic but at least seem to be able to ship (i.e. deliver results)". Has anyone gone through this pace transition and how did you learn to "ship" faster? Were there bad habits you had to unlearn from your academic training?``` —Christian Tai Udovicic

    This is a good question, Christian. I'll apologize in advance that I am going to almost completely fail to answer it directly. But I hope this will end up being a useful response.
    First of all, I reject the premise that your academic training has taught you "bad habits."[1] Perhaps, if you simply never finish anything—not a paper or conference poster or abstract, grant proposal, project of any kind—then I suppose, yes, that speaks to a deeper issue. But it is almost certainly not caused by your academic training because shipping those kinds of things is central to the training and job of an academic. A habit is only "bad" if it is not appropriate for the environment in which it is acted upon. By analogy: taking my shoes and socks off when I'm at home on my couch is perfectly acceptable, but doing the same in an airplane is absolutely beyond the pale. Context matters. A lot of "habits" that are desirable in academic contexts will be undesirable in corporate ones. And vice versa!
    It is therefore worth analyzing what distinguishes these two contexts—"academia" vs. "industry," both broadly construed—that generates disparate ideas of "good" and "bad" behaviors. As with so much in life, I think it comes down to priorities that flow down directly from incentives.[2] Given your technical background, I expect that you'll be ok with thinking about an optimization problem under a set of constraints, so let's frame it that way. What is being optimized for, and why? What are the constraints and priorities in academia vs. industry and how do those generate diverging worldviews that lead to comments like the one your friend encountered?
    I think that they break down roughly as follows:

  • In academia, you optimize for correctness and thoroughness. These are constrained to some extent by perfection on the upside and by available time and money on the downside. Neither of these is a strict constraint, because it is always possible to spend more money (if you can find it) to explore a problem more deeply. The best academics are those who do extremely deep and thorough work. In most academic contexts, the worst thing that you could possibly do is to ship something that is preventably wrong or sloppy. If you do this enough, you will probably no longer have a job.
  • In industry, you optimize for profit. Profit is constrained by revenue on the upside and costs on the downside, according to the axiomatic equation: PROFIT = REVENUE - COST.[3] The best businesspeople are those who make the most profit. In most business contexts, the worst thing that you could do is spend a lot of money without making enough in return to justify or cover those outlays. If you do this enough, you will probably no longer have a job.

    Academic / research training and incentive structures do not pay tremendous respect to any concept of diminishing returns. You are fully incentivized to make your research effort as good as it could possibly be, and the only practical bounds on that are the money and time available to you. You are also—depending on your particular career stage and trajectory—incentivized to publish; but publications are rarely endpoints of a research effort and often merely checkpoints. Grant awards have limited terms, but are incredibly difficult to obtain, typically provide funding evenly over a period of years, and there is almost never a built-in incentive to finish sooner.[4] Scientists care so much about advancing their research (or “making discoveries” if you want to frame it that way), that we all know some who have retired from long and successful careers only to continue doing more research… when their time and money are both running out! The only thing that gets them to stop is death.[5] I'm not suggesting that this is bad; I also hope to die in office. The impulse is really good in context, and society should absolutely want and support some fraction of the population to just doggedly pursue answers to questions irrespective of some kind of crude and shortsighted measure of “efficiency” or “return on investment.”[6]
    But in for-profit industry[7] contexts, diminishing returns matter very much. There are many things in a business that cost money, and almost always cost more money as a function of time. Buildings, insurance, marketing, etc. In the vast majority of businesses, the largest single cost is personnel. Your time is incredibly costly and incredibly valuable.[8]
    Until the project ships—as long as you are working on it and not shipping—it is exclusively a cost. Shipping a project transitions it from a phase in which it is purely an expense into the phase where it can start to accrue benefits for the company (meaning that it can start decreasing costs or increasing revenues).[9] The benefits of many projects tend to accrue over time, e.g. a certain number of dollars per day or a certain percentage growth or savings per year, etc. The project per se is not net positive until the cumulative benefit it has provided begins to exceed its cumulative costs, including the cost to get the project shipped and the cost to maintain it once it has shipped. Given this model, a few things will be immediately obvious:

  1. There is a point at which continuing to improve the project instead of shipping it means that the project has no possibility of achieving profitability. Its costs outweigh any possible future benefits in purely financial terms.
  2. Shipping a project sooner means that the total accrued benefit needed for the project to become profitable is smaller. The project can therefore become profitable sooner, for longer, and will probably be more profitable over its total lifetime.
  3. There is a point at which further refinements / improvements to a project do not improve the rate of accrued benefit enough to offset the cost of those refinements / improvements.

    Everything in a (real[10]) company should be done in service to the objective of increasing revenues or decreasing costs.[11] However, the anticipated benefit (aka profitability) of any given project is almost always just a guess. This can be a highly educated guess, based on substantial market research and industry experience, customer discovery, etc. But it is still only a hypothesis at best. The true benefits are always unknown until the project ships. It is usually only once you put a product in front of potential customers that you get real, unvarnished feedback about its quality and appeal as a product.[12] This basic truth is behind the practice among small, early-stage, and startup companies of "pivoting" (i.e. iterating on product offerings until they find one that "fits" the market). The risk of new projects or products failing is one of the most significant sources of uncertainty for any business. Indeed, I would assert that it is one of the reasons that large, established businesses tend to calcify and stop innovating like they did when they were small and scrappy and had a lot less to lose (unless they make specific efforts to combat this).
Consider this in the context of the model above. The costs are known and measurable right now. The benefits are at best a hypothesis. And the only way to start to constrain them is to ship. This leads to the conclusion:

4. Shipping quickly mitigates risk.[13]
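The break-even logic above can be sketched numerically. This is a toy model with entirely made-up numbers (the `break_even_day` helper is hypothetical, not any standard formula): a project burns a fixed daily cost while in development, then accrues a fixed daily benefit once shipped.

```python
def break_even_day(dev_days, daily_cost, daily_benefit, horizon_days):
    """Return the first day on which cumulative benefit exceeds the
    cumulative cost sunk into development, or None if the project
    never breaks even within the horizon."""
    sunk = dev_days * daily_cost  # total cost to reach shipping
    for day in range(dev_days, horizon_days + 1):
        if (day - dev_days) * daily_benefit > sunk:
            return day
    return None

# Ship a rougher version after 30 days of development...
early = break_even_day(dev_days=30, daily_cost=1000,
                       daily_benefit=500, horizon_days=365)   # day 91

# ...or polish for 120 days for a somewhat better daily benefit.
late = break_even_day(dev_days=120, daily_cost=1000,
                      daily_benefit=600, horizon_days=365)    # day 321
```

Even granting the polished version a higher daily benefit, shipping earlier breaks even months sooner, because the early ship both shrinks the sunk cost that must be recouped and starts the benefit clock running. That is points 1 and 2 above in miniature.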

    I think that “failure to ship” by people who are otherwise proven to be incredibly competent and capable of producing good and impactful work—which describes nearly everyone with an advanced degree that involves writing a thesis or dissertation—is almost always due to perfectionism at root. As previously asserted, "perfection" is the optimization target of academic outputs; it is therefore a system that is ideally suited to encouraging the worst impulses of people who already have perfectionist proclivities. Furthermore, one of the things that a higher education attempts to equip us with is an incredibly sharp critical toolkit; we learn how to be judgmental. Someone who has been through a Ph.D. program, especially, knows from the puddles of sweat, blood, and tears exactly all of the ways that something can suck. And they are the world's foremost experts at understanding and enumerating how their own work can suck. They also hold themselves and their work to incredibly high standards, and have been trained to do everything possible to avoid even the possibility of errors, or at least to fully and thoroughly document where uncaught errors might be hiding.
    The frustrating thing is that perfectionists are also mitigating risk by not shipping. If nothing is ever delivered, then nothing ever fails to live up to our expectations. The reputational damage of imperfect work product is just weighted, psychologically, far more heavily than any risk of not delivering anything at all. This isn't ironclad logic, but the part of a person… the part of me that knows how good my work could be if only I personally didn't suck so fucking much… doesn't traffic in logic. So it is not—as the startup bro in your question may have believed—that your friend and other "academics" suffer from some horrible personality flaw that makes them allergic to getting things done. But also, to some extent, people who go into academia have self-selected for personal qualities that suit academic priorities quite well, and academic systems have further trained and reinforced those.[14]
    Academics get things done all the time, truly incredible things of unimaginable scope. Good academics are simultaneously hyper-focused specialists and also adaptable generalists.[15] Such people should be highly sought after and highly compensated by profit-driven organizations, but they will need to refocus their efforts toward a new set of priorities and constraints, which I believe they are capable of doing. Many have, though they learned it by fire. But it would rarely occur to the entrepreneur to explain a profit motive to the academic, nor to the academic to explain the primacy of thoroughness and correctness to the entrepreneur; to each group, these concepts are as foundational as the ground. I think a lot of grief, misunderstanding, and frustration (sometimes veering into hostility) between "academia" and "industry" can be easily avoided by just recognizing that different things are being optimized for in different contexts, and that this is fine.

    I hope this is helpful.


  1. To be honest, my real "first of all" is that I expect whoever said this to your friend is a provincial ass and will probably be a bad boss and / or coworker, and you should avoid working with such people if at all possible. The company does not have a good work culture if anyone's impulse is to welcome you with some version of "but you're one of the good ones." ↩︎

  2. You see, Bobs, it's not that I'm lazy. It's that I just don't care. ↩︎

  3. I understand that to many academics, this one statement is straightforwardly negative, and that it might as well read PROFIT = EVIL. Pretend for now that this is as neutral as any other mathematical relationship, like F=ma or PV=nRT. ↩︎

  4. Because having grant support is just about the closest thing to any kind of job security that any academic could ever hope to enjoy and, conversely, not having grant support is equivalent to not having a job at all… there are in fact strong incentives to take more time to complete grant-funded work. ↩︎

  5. Perhaps somewhat sadly, many try to power through severe cognitive decline, contributing to the "crazy old professor" stereotype. And I don't blame them. ↩︎

  6. An important reason for this is that we know that research outcomes sometimes suddenly become incredibly fruitful and impactful after long, slow burns. We call such events “breakthroughs” and they can shift civilizations. Importantly, we don’t know when and where breakthroughs will occur; almost by definition, we cannot, or else we would already have all of the information that in-and-of-itself constituted the breakthrough. A lot has been said about this, and a lot more should be said about it; but not right here. ↩︎

  7. Every normative statement that I make should carry the caveats of “generally” and “YMMV,” of course. There are lots of factors that might distort the basic premise here. But one must start from a basic premise. ↩︎

  8. Contrast this with how you were almost certainly taught to think of your time in academic / research contexts, as basically worthless. Certainly not something you put a dollar value on. That would be crass. ↩︎

  9. It is so important to ship a project as soon as possible that there is a standard term for the optimal point in development to do so: the Minimum Viable Product (MVP). Briefly, the MVP is the least developed version of an output that provides any net benefit at all. ↩︎

  10. A lot of entities in the world look superficially like real companies, but they are not. An important characteristic of a real company is that it is intrinsically guided by the axiomatic statement, PROFIT = REVENUE - COST. Organizations optimized for the amusement of the founders / funders, as an abstract financial instrument (including many VC-backed efforts), or to commit fraud… these are probably not real companies. ↩︎

  11. This includes protecting revenues and costs (real or potential), which is to say managing risk. This is why many things that companies do might not have a straight-line impact on the profit equation. If there is a 1% chance of an event that increases costs by $1M, then mitigating that risk is in fact a cost-cutting measure. Considered in isolation, the E.V. of ~$10k might seem like the "correct" answer for "budgeting" this risk. But there is an infinite set [A0, A1, … An] of risks, only a small fraction of which can even be clearly defined, and money is always finite. What amount of money should be spent to mitigate any particular risk vs. engaging in other activities is a judgement call, and typically making that judgement (and being held responsible for it) is one of the most important jobs of a CEO. ↩︎

  12. Through the only metric that really matters: will people pay for it? ↩︎

  13. Again, in general, and I know you’re already thinking of one of the most important exceptions to this conclusion. Which is that if you ship total shit then it generates a different kind of risk. Namely, it can damage your brand, reputation, client relationships, etc. And missteps of this type have killed goliaths. The risk of this must be balanced against the risk of not shipping at all. The ultimate judgement for how to balance these risks—and ideally the ultimate blame—should again lie within the C-suite of the company. (How that can go wrong, e.g. when the CEO isn’t getting accurate information from project leaders, or when they have perverse incentives is… beyond the scope of this document.) ↩︎

  14. Many people on "the industry side" have parallel combinations of self-selection and reinforcement learning. Don't let them claim otherwise. But this document is aimed at academics. ↩︎

  15. Do not let anyone pigeonhole you as just a domain specialist. The era of the hyper-specialist research scientist with no other competencies is long over (if it ever truly existed). Hypothetical strawman example: Professor A is the world's top expert on the mating behavior of one species of Ecuadorian spider… and also has well-demonstrated expertise in applied statistics, programming, computer vision (to process videos of spiders mating), logistics (to organize fieldwork), management (over a lab of students and postdocs), and administration / bookkeeping / compliance (for grant administration). In a normal mid-sized to large business, these would all be entirely separate jobs done by individual specialists in each task. Professor A is also bilingual in English and Spanish. I'm sure you can quickly brainstorm other essential skills that such a person would most likely possess. ↩︎