7,420 miles

My trusty shoes

I have a lot of science-focused posts writing themselves in my brain (the push for open data, science you can do with a smartphone, the “should I join a startup” series… they sound pretty interesting, right?), yet I haven’t had much time to put pen to paper these last few weeks. But in a lot of ways, this post is about science. You’ll see.

Not unlike a lot of other driven, hard-working, slightly Type-A people, I have a pretty intense hobby: Barefoot running.

I began running during my second year of grad school as a stress-management technique (thanks to some encouragement from a very dear friend). He actually baited me by promising a big slice of cheesecake when I was able to hit 4 miles, aka, the “campus drive loop” around Stanford. So really, this obsession began with cheesecake.

Initially, running was our excuse to get out of lab so we could actually enjoy some California weather. We were in the habit of getting in early and leaving late, so we were a bit vitamin-D deprived. There was a functional (albeit slightly scary) shower in our building, so I’d get in around 9 or 10 a.m., go out for a run around 4 or 5 p.m., take a quick shower, eat dinner, then go back to work for another few hours. Rinse, repeat.

I started running in normal shoes (arch support, higher padded heel, etc.). I enjoyed my little afternoon excursion, but I could never quite get past that 4-mile mark and got tired of “having” to buy new kicks every 6 months (the arch would fall, the padding would wear, etc.). I found an old pair of track-style shoes in my closet (thin tread, no heel, no arch support, bonus: bright pink and blue 80s colors) and started running in those. It took a few weeks, but I remember for the first time getting that feeling of “I’m not ready to stop” when I saw my finish line. That, my friends, was the beginning of a deep dive into long-distance running. And what a glorious ride it has been.

Around that time, I switched running partners to another grad school buddy of mine. We would hit the pavement and talk about our research, mostly venting frustrations about difficult minutiae we were troubleshooting, the concerning habits of our labmates, and how little impact our work would have in the long term. It was as much physical therapy as it was mental. Though I didn’t realize it at the time, expressing opinions on my work in a “judgement-free zone” built the framework of my worldview on research: its values, its pitfalls, and where and how I fit into its structure. Ultimately, this worldview led me to leave grad school to join a startup created by that same running buddy.

Those little track shoes got me through my first half-marathon. (As you can see, I kind of destroyed them).

Track shoes

A few months later, I switched into Vibrams (this would be about 2 and a half years ago now), and I’m still wearing the same pair. Those shoes took me so many places. I ran up mountains, to the ocean, through redwoods, around islands, in the desert, through wine country (multiple times), and even recently through some snow and ice (thanks, winter).

They helped me through a major life transition from grad school to a startup, a shoulder surgery, 80-hour workweeks, an almost-completed marathon training, and a full running-form rebuild when my marathon training failed. Then they saw me through a cross-country move and an intense job hunt. Now they are seeing me through my next professional step with a digital health startup.

I guess what I’m trying to say today (which happens to be International Barefoot Running Day) is that this hobby made me a better person and a better scientist. I hope you’ve got something that provides as much physical and mental benefit to you as well.

And with that (you guessed it) I’m going out for a run.

 


The most frustrating (and least publicized) thing about science

Photo by David A. LaSpina, JapanDave.com

A close friend suggested I read Zen and the Art of Motorcycle Maintenance. In a book about a cross-country journey, mental illness and self-discovery, I was surprised to find an exquisite description of the most common zemblanity of science.

For those of you who aren’t familiar with the term ‘zemblanity’, consider this your word-of-the-day:

“So what is the opposite of Serendip, a southern land of spice and warmth, lush greenery and hummingbirds, seawashed, sunbasted? Think of another world in the far north, barren, icebound, cold, a world of flint and stone. Call it Zembla. Ergo: zemblanity, the opposite of serendipity, the faculty of making unhappy, unlucky and expected discoveries by design. Serendipity and zemblanity: the twin poles of the axis around which we revolve.”

– Armadillo by William Boyd

The book Zen is a first-person narrative. The narrator begins by describing the life of Phaedrus, a highly intelligent man who began college at the age of 15 studying biochemistry and molecular biology. We discover that Phaedrus is actually the narrator himself, before a severe mental break from reality and subsequent electroconvulsive therapy. That treatment so altered his personality and brain structure that Phaedrus is, in fact, an entirely separate person. Other than brief flashes of memory, the narrator discovers Phaedrus almost as you would discover any stranger – by what they leave behind. Thankfully for the narrator, Phaedrus was a prolific writer.

During his studies in college, Phaedrus began to think about the scientific method. This dogma instructs us to form a hypothesis, create experiment(s) to test said hypothesis, and then make an evaluation based on the experiments. If the experiments are planned and executed correctly, the hypothesis should be proven true or false. In other words, you could say this series of steps is meant to scientifically determine truth.

But as Phaedrus continued his philosophical evaluation, focusing specifically on hypothesis generation, he realized something.

“As he was testing hypothesis number one by experimental method a flood of other hypotheses would come to mind, and as he was testing these, some more came to mind, and as he was testing these, still more came to mind until it became painfully evident that as he continued testing hypotheses and eliminating them or confirming them their number did not decrease. It actually increased as he went along.”

At first, this was an amusing thought. He coined the law: “The number of rational hypotheses that can explain any given phenomenon is infinite”. He even found it helpful during times of scientific frustration:

“Even when his experimental work seemed dead-end in every conceivable way, he knew that if he just sat down and muddled about it long enough, sure enough, another hypothesis would come along. And it always did.”

I think any scientist doing independent, discovery-based work can empathize with that situation. It’s the thing that keeps you going when you’ve hit your head against the same wall for weeks or months. It’s anti-boring. Science is discovery focused, and there’s always a new detail to uncover – no matter how small.

But if you think about this situation in another light – really think about it, as Phaedrus did – doesn’t this feel a bit… unproductive? You begin with a problem – a real, tangible problem that you are going to solve. After 6 months, or a year, or two years, you find yourself describing a particular nuance in so much detail that the original problem isn’t even mentioned.

You start with an elevator pitch that anyone could relate to, such as, “I’m going to determine why Cancer Type X responds to Therapeutic A, but Cancer Type Y does not.” But you end up describing something entirely different, like how the sensitivity setting of a particular instrument affects the determination of what’s-it in the whatchamacallit method.

Unfortunately, Phaedrus couldn’t reconcile his discovery with the purported purpose of science.

“If the purpose of the scientific method is to select from among a multitude of hypotheses, and if the number of hypotheses grows faster than the experimental method can handle, then it is clear that all hypotheses can never be tested. If all hypotheses cannot be tested, then the results of any experiment are inconclusive and the entire scientific method falls short of its goal of establishing proven knowledge.”

And this wasn’t the only thing that shook him. He realized that not only was the method itself flawed, but so was its result. Instead of determining an unshakeable truth, what we call “truth” or “fact” is simply the most superior analysis of the time. Einstein made a similar point:

“Evolution has shown that at any given moment out of all conceivable constructions a single one has always proved itself absolutely superior to the rest.”

So truth was in fact dependent upon time.

“Some scientific truths seem to last for centuries, others for less than a year. Scientific truth was not dogma, good for eternity, but a temporal quantitative entity that could be studied like anything else.”

This is a bit surprising at first, but in a split second you realize that of course it’s true. Our understanding of a situation is constantly updated as new knowledge arrives. Phaedrus eventually determined that “the predicted results of scientific inquiry and the actual results of scientific inquiry are diametrically opposed here, and no one seems to pay too much attention to the fact.” Hence, this is the biggest zemblanity of science.

I’ll stop us here, rather than continue down the rabbit hole of Phaedrus’ analysis. Poor Phaedrus did not take this well. Believing now his effort in the sciences to be entirely futile, and science to be the major producer of multiple, indeterminate, and relative truths in the world, he simply quit. At the age of 17, he was expelled from the University for failing grades. After a series of other events, we eventually find Phaedrus back in an academic setting, but instead of studying science, he studies philosophy.

I think Phaedrus’ realizations resonate with me – and perhaps they resonate with you as well – because I believe I have been where Phaedrus is. I have realized how futile science can feel. How you feel like you are digging an increasingly faceted hole rather than a path forward. Most scientists are naturally analytical people, and may get into science because they are seeking a world where they can determine black-and-white truths. But instead, they are (sometimes harshly) confronted with the grayscale reality.

For most, this is simply a process of maturation. You adapt. I can distinctly remember when my worldview shifted into the gray and how it deeply impacted my personality and outlook. It was a watershed moment for me. But some can’t reconcile this realization, and instead find something else to do, like Phaedrus.

This raises an interesting question: If I agree with Phaedrus’ statements, which I do (mostly), why am I still a scientist? Why is anyone?

Ultimately, I think it’s a distinction in what you believe science produces. I believe science produces knowledge, not truth. Phaedrus eventually sought truth elsewhere, in philosophy. Though I enjoy philosophical whimsy now and then, I personally do not find truth in philosophy either. I find a dizzying spectacle of thought dissection (similar to the hypothesis conundrum described earlier) that leaves me with more questions than answers. But unlike with science, I don’t get the same satisfaction at the end of the process.

I think that’s a difference too – we’re all seeking answers in our work. You may or may not find them. But you find a situation where that process still fulfills you. I went through a “Phaedrus” moment when my science produced results I did not find valuable. I think this is another common situation, one that is often mistaken for an existential crisis (“I am not meant to be a scientist”) and leads many talented thinkers, like our semi-fictional Phaedrus, to quit. Instead of quitting, I found a value-matched environment.

Since coming to terms with Phaedrus’ conclusions seems (to me) to be a common philosophical process for scientists, yet also one of the least-advertised elements of the field, I think we should be more open about the realities of scientific work. Zen was published in 1974, but its dissection of the scientific process is just as relevant today. This means we should be better educating scientists and non-scientists about how science actually works and what it produces at the end. We should value the process of knowledge building instead of just chasing the next PR headline.

I hope you’ve enjoyed this little philosophical foray. It certainly made me reflect on my evolution as a scientist. As a scientist, what other watershed moments have you experienced that aren’t advertised as part of the process?

Reblog: Why you shouldn’t decide anything important at your board meeting

This is a great post about how to prepare when you need a group to reach a consensus. The official meeting shouldn’t be the first time you pose an important (potentially game-changing) question, especially one that you are heavily invested in. Though written specifically for entrepreneurs regarding board meetings, I think it’s good life advice. And for you scientists with the commonly-dreaded committee meeting coming up: I think it’s worth a read for you too.

Check it out here:

http://techcrunch.com/2014/03/19/why-you-shouldnt-decide-anything-important-at-your-board-meeting/

ScaaS: Science as a service (a research revolution)

CloudLab

Greetings, my fellow colleagues of tedium. “Give me your tired, your poor, your huddled masses yearning to breathe free …” Welcome to the new world of research: ScaaS.

It’s time to bring us out of the dark ages. No more old PCs running Windows 2000. No more equipment still using floppy drives. No more carpal tunnel syndrome from repetitive tasks a robot could (and SHOULD) do. It’s not just about our quality of life – it’s about reversing the devastating trend of ever-costlier R&D.

I’m happy to say that the future is here – or, at least, we’re on the brink of it. I’m calling it science as a service. It’s like software as a service, but for the research industry. Even if you aren’t familiar with the term SaaS, you’ve probably used an implementation of it. SaaS-based products host their software and your associated data in the cloud, and you interact with them through a simple web browser interface.

Now consider using that model for your favorite scientific experiment:

Instead of purchasing the hardware ($20k – $100k, or more), dealing with the software, maintaining the instrument, and doing the experiment by hand, you pop open your favorite web browser. You select the experiment and direct every detail by specifying a series of options (cell type, temperature, internal standard, instrument settings, etc. – if it can be altered, there should be an option for it). Maybe you also select parameters for how your data should be analyzed, how many times it should be repeated, or you select a desired completion date. Click, click, click, and your little experiment is on its way.* And you are on to bigger and better things.
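To make the idea concrete, here is a minimal sketch of what ordering such an experiment might look like programmatically rather than through a browser form. Everything here is hypothetical – the endpoint, field names, and parameter values are illustrations of the concept, not any real provider’s API.

```python
# A hypothetical ScaaS order: every alterable detail of the experiment
# becomes an explicit, recorded parameter. Endpoint and fields are made up.
import requests

experiment = {
    "assay": "cell_viability",
    "cell_type": "MDA-MB-231",
    "temperature_c": 37,
    "internal_standard": "caffeine",
    "replicates": 3,
    "instrument_settings": {"detector_gain": "auto"},
    "analysis": {"normalize_to": "vehicle_control"},
    "requested_completion": "2014-05-30",
}

response = requests.post(
    "https://api.example-scaas.com/v1/experiments",  # made-up endpoint
    json=experiment,
    headers={"Authorization": "Bearer YOUR_API_TOKEN"},
)
response.raise_for_status()
print("Experiment queued:", response.json()["experiment_id"])
```

The point isn’t the syntax – it’s that the entire experimental design becomes an explicit, shareable, re-runnable artifact instead of a page in someone’s lab notebook.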

Sounds great, right? Other people think so too. There are already a few names in this field. You should check out this great talk about how Emerald Therapeutics’ Symbolic Laboratory creates a construct for lean research (and how “lean research” may no longer be an oxymoron). TechCrunch blogged recently about Transcriptic and Benchling, and companies like Synthego, Gen9, and Ginkgo Bioworks are making headway too.

So what’s keeping this from being immediately adopted in every lab in the country?

First, it’s probably because most experiments are not done in an automated fashion. If you go through a web interface to order an experiment, but then a human in a CRO does it for you, this doesn’t help much. It may save you some time, but it’s not a scalable or cost-efficient model. But just because these experiments aren’t normally done in an automated way doesn’t mean they can’t be done in an automated way. Most people still prefer grad students as a cheap form of labor (students making just over minimum wage, in fact), even though an automated instrument is more cost-effective in the long run.

But there’s another problem. It’s a mindset. Let’s face it: We scientists can be greedy. We just don’t want an experiment to be out of our hands. There is a bias toward trusting that if you do it yourself, it’s “done right”. But the do-it-yourself model hasn’t worked out so well for us in terms of reproducibility. I’m not saying that you shouldn’t be concerned about handing over your experiments, but if a robot is doing the work you can take solace in the fact that it will do what it’s programmed to do.** Robots don’t make complex errors like forgetting one element of a buffer recipe because they haven’t had their morning coffee. The errors are standardized, and if a major problem occurs, the robot stops entirely.

Lastly, scientists need to be more demanding about data. We need to make a priority of gathering it, storing it, and sharing it. In order to trust an experiment to a ScaaS system, the user needs to get back all the data (raw data, metadata, instrument files) they can get their hands on. Not just for the experiment at hand, but for the controls too. And not just the control run before their experiment – every control run. Ever. That should all be open-access. That way a user could investigate global changes in the behavior of the instrument, not just see an isolated period in time when their experiment was run. I would even suggest providing video records of the experiment in progress. (If you can afford a webcam to watch your kitty sleep all day, then a ScaaS center can afford one to watch its robots.)
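Here’s a rough sketch of what that open control history would buy you: with every control run ever recorded in hand, a user could check the instrument for long-term drift instead of trusting the single control from the day of their experiment. The file name and column names below are invented for the example.

```python
# Checking an instrument's complete control-run history for drift.
import pandas as pd

controls = pd.read_csv("all_control_runs.csv", parse_dates=["run_date"])
controls = controls.sort_values("run_date")

# A rolling 30-run average exposes slow drift in the control signal.
controls["rolling_mean"] = controls["control_signal"].rolling(window=30).mean()

# Flag runs that stray more than 3 standard deviations from the long-term trend.
threshold = 3 * controls["control_signal"].std()
outliers = controls[(controls["control_signal"] - controls["rolling_mean"]).abs() > threshold]
print(f"{len(outliers)} control runs deviate from the long-term trend")
```

None of this is possible if the only control data you ever see is the single run sitting next to your samples.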

In conclusion – the solution is out there. But to adopt this system, we have to change the way we do science. We need to start integrating automation on all levels, incorporating computer science into the scientific way of life, and most of all, becoming gluttons for data.

* There is, of course, a potential problem with this. I’m not suggesting that only one powerhouse should dominate the market for a particular experiment. It would be disastrous to find out later they’d done something wrong – remember the fiasco when we found out the major breast cancer cell line MDA-MB-435 was actually a melanoma cell line? I don’t think we need to get ourselves into that situation again. I think the private sector can do what it does best – let companies compete until a few “gold standard” options exist that are well-validated and trusted.

** The old adage “garbage in, garbage out” applies here. To properly program a robot, you have to have a unique blend of science and computer science savvy. I’m privileged to know some of these gurus, and in the right hands, this works.

How to buy new scientific equipment


An acquaintance of mine started working in a new lab, and they have a beautiful, expensive, nearly brand-new liquid handler*. It’s currently unused and gathering dust. The technique they bought it to perform (which has to be laborious to do by hand, or they wouldn’t have purchased it in the first place) is currently being done. By hand.

*A liquid handler is a robot that is programmable to, literally, handle liquid. You can load it up with plates, tubes, reservoirs, petri dishes… whatever you’d like, and it will transfer liquid for you from one place to another. If you’d like your mind blown, see my current favorite at HamiltonRobotics.com and check out their YouTube videos. (I’m not being compensated or anything to say this – it really is my favorite.)

The saddest part is that I’ve heard this song and dance before. A lab finally convinces their PI/Director/Money-Giver to purchase an automated system, and it sits there unused because… why? Most of the time only one person in the lab knows how to use it. Perhaps it worked for a while, then broke, and you can’t get the supplier to fix it. It’s rarely used to its full potential, and in the meantime, people are still getting carpal tunnel from repetitive pipetting.

This post is very nuts and bolts, but I can’t help myself. I’ve decided to write a tutorial on how to buy a new piece of scientific equipment. Do I have credentials to speak on this topic? Yes. Have I been employed to sell scientific equipment for the last 10 years? No, my friends. My education came from the streets. When you are put in charge of buying HPLCs, liquid handlers, peptide synthesizers, plate readers, mass spectrometers, and other things I can’t even remember, you learn a thing or two.

So here we go – these are my super-duper, number one, must-follow commandments for buying a piece of equipment. Share widely!!

1. Demo. Demo, demo, demo, demo, demo. Seriously. Do not purchase without a demo. Most manufacturers will offer to bring it on-site, but they’ll want to babysit you through one experiment and then leave. Ask them to leave it with you for a few days so you can find the bugs on your own (they are very good at avoiding them with a hand-held demo).

If they won’t bring it on-site, go to them if you can (but be wary – why don’t they want you testing it in your own lab?)

If they won’t bring it to you, and you can’t go visit them, but you absolutely have to have it, then write a conditional clause into your purchase agreement that allows for a trial period. Most manufacturers work on at least a net-30 basis if your lab has any credit (meaning you don’t have to pay until 30 days after it arrives). See if you can negotiate 50% down on net-30 terms, with the remaining 50% due later (or even better terms). That way if it breaks or isn’t working properly, you still have leverage so they’ll pay attention to you and fix it.

2. Pretend you are dumb. I can’t tell you how many pieces of equipment could not actually perform the function they were built to perform. When they show you how it works, ask what every button means, and ask why they are using the software in a particular way. This is how you find the bugs. Let’s not live in a fairy tale here – there will be bugs. The sooner you find them, the sooner you can tell if you can live with them. Try to think about any variable you’d want to change during an experiment and see if the equipment can handle it. Be annoying, but in a charming way. Buy them a cup of coffee and express excitement for the product (while asking them every question you can think of).

3. Purchase equipment that exports raw data. For those of you who already analyze your own data – bravo, this is clearly a must for you. For those of you who don’t – why not? Are you sure you won’t ever need the raw data… ever? Even if you only use their analysis software, can you guarantee that the next software upgrade won’t change something that you can’t control? Most equipment will export raw data into a simple file, like a .txt, .csv, or .xml. You can probably find a way to work with whatever form it’s exported in (as long as it doesn’t come out in binary).
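For a concrete (if simplified) illustration: a plain-text export can be pulled into your own analysis in a couple of lines, completely independent of the vendor’s software. The file and column names here are invented for the example.

```python
# Re-checking raw values from a hypothetical plate-reader .csv export.
import csv

with open("plate_reader_export.csv", newline="") as f:
    rows = list(csv.DictReader(f))

absorbances = [float(row["absorbance_450nm"]) for row in rows]
print(f"{len(absorbances)} wells, mean A450 = {sum(absorbances) / len(absorbances):.3f}")
```

Try doing that with a proprietary binary file after the vendor changes its analysis software.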

4. Modify your purchase agreement. If you were buying a new car, would you take the initial offer? Oh heck no. You’d negotiate. Why? Because you are a smart person, and you know the only time you have leverage is before the purchase is made. In this industry, most of the profits are made from the service contracts, not the equipment itself – they get more money from servicing their equipment than from selling it. Don’t just sign what’s offered. Make a list of exactly what the equipment needs to do for you:

  • How often does it need to function?
  • What accuracy is required? (Be specific! Also, make sure you can test and confirm any of these numbers, and offer to share the data with them)
  • What does the software need to do?
  • What happens if the software upgrades? (Doesn’t it still need to do the things above? Why yes, yes it does.)

Include conditional clauses so that the equipment has to do those things or you get to return it. You can ask your lawyer friends for some good legal terminology to use here.

Be prepared because they are going to push back a little on this. They will say they can’t guarantee their instrument will function perfectly all of the time. “Of course” – you say in a conciliatory tone, because you are buddies – “we’re in this together”. Ask them what they can guarantee. Make it clear that time lost is money lost. If it’s broken, how quickly can they fix it so it meets the previous expectations?

If you are going to hold them to a high standard, put yourself there as well. Get as much data as possible from your controls and experiments so you can show them when it works and when it doesn’t. Be meticulous and offer to share that data with them if you can.

5. Make them invested in you. Try to make them care about their instrument’s success in your lab. Perhaps you have friends who will all want to buy their product if it works well. Maybe you can use their equipment for a new application – this means more $$$ for them, and you can offer to work with a product manager to write a new application note. Or maybe you can just get them to care about you and your work, and the negative impact that’s made when their equipment doesn’t function properly. This is why you have to be friendly and charming through this whole process – this is a dual investment, and they should want to work with you.

Hungry for more? If you’ve read all this and want to keep going, then we should be friends. Really. See my super pro tips below:

Super pro tip #1: Buy your own computer. If the equipment needs to use a computer, purchase it yourself (or better yet – build it). Ask what types of connections are required to hook up to the instrument (Serial port? Ethernet? PCI card?). This way you are in control if the computer breaks down, you can get a decent model to your specifications, and you’ll save some moolah. Otherwise they’ll charge you $3000 for a crappy Dell laptop. (No offense to Dell, but seriously).

Super pro tip #2: Ask about software compatibilities. If it only runs on Windows XP, be afraid. Be very afraid. If getting stuck with a $3k Dell laptop is bad, getting stuck with one only running XP for the next 5 years is worse. This will help you gauge how much they care about their software. In my experience, most instrumentation companies are made of hardware people, not software people. The hardware can function beautifully but bad software will screw it all up.

That’s it! If you have any additional tips or comments, let’s hear ’em. We should be fighting to make research better, not putting up with the same problems over and over again.