Monday, September 24, 2007

Biologics, Biopharmaceuticals, and Protein Therapeutics, Part I: The State of the Sector

Before moving on to the next series of articles, I'd like to invest some effort into researching and demystifying the convoluted topic of protein-based therapeutics. Indicative of this sector is the somewhat interchangeable nomenclature. Protein therapeutics (e.g. antibodies, insulin, erythropoietin) can also be referred to as biopharmaceuticals (a term which can also include nucleic acids). Biopharmaceuticals in turn are often described as biologics, a term which encompasses any medicinal product derived or constructed from living tissues or cells.

Anyone interested in the biotechnology or pharmaceutical sectors is sure to understand the ongoing hype surrounding biologics. Bristol-Myers Squibb's purchase of Adnexus for $430 million is the most recent example. Last month Pfizer announced that it is breaking ground on a $50 million biologics facility and simultaneously paid out $30 million to use Xoma's (XOMA) bacterial cell expression technology, aiming to have 20 percent of its pipeline product portfolio in this sector by 2009. Merck shelled out $400 million for GlycoFi last year in an attempt to catch up with the big pharma early adopters of biologics (e.g. Roche, J&J, GSK, AstraZeneca). Needless to say, examples of vigorous froth in the biologics arena are easy to come by.

For an in-depth understanding of the state of the pharmaceutical corporations, I highly recommend the recent report published by PricewaterhouseCoopers: Pharma 2020: The vision. In short, R&D expenditures have risen steadily while the number of new molecular entities (NMEs) approved has slumped. Only a minority of pharma companies earn a substantial income from new products, and a majority of the leading firms stand to lose anywhere up to 40% of total revenue to patent expiries. Contrast this with the fact that national and global opportunities for the healthcare and medicine industries are actually growing, and the bottom line amounts to big pharma shifting into crisis mode.

The above scenario, currently playing out in slow motion, serves as a kind of vindication for the biotechnology field, which since its beginnings in the early 80s has generally employed a comparatively more nimble, scientifically driven approach to medicine. The business model of innovating biomolecule therapeutics and treating smaller markets of unmet medical needs was originally dismissed by big pharma in favor of a small molecule blockbuster approach. In the past few years, however, as niche market drug discovery has become increasingly necessary, unmet disease therapeutics have proven valuable both monetarily and strategically. In addition, biomolecules have become blockbusters themselves, accounting for nearly one quarter of this year's total pharmaceutical market sales growth. This apparent role reversal, and the strong market projections for protein therapeutics, are no doubt the reason behind the recent wave of biotechnology mergers and acquisitions.

Unfortunately, the biotechnology drugmakers will have little time to rest on their laurels. Until now, these companies have not had to worry about competition from generic manufacturers, for a number of reasons. Simply put, proteins are larger and more complex than small molecule drugs by orders of magnitude, and producing these biotherapeutics requires living organisms and detailed purification protocols. Likewise, proving that two biological macromolecules are identical in structure and efficacy without performing the actual clinical trials is no small feat. Ultimately, the Hatch-Waxman Act, which regulates generic pharmaceuticals, breaks down when applied to biologics. In a few months, however, it's all about to change: new regulatory legislation is currently in Congress, and generics firms are eagerly anticipating the go-ahead for biologic follow-ons in the US market.

OK, that pretty much brings us up to speed with the state of things. In part II, I'd like to use this background knowledge of the protein therapeutics industry to evaluate opportunities in the technology and in the market.

Monday, September 17, 2007

Interview with Helicos COO Steve Lombardi

Following my Helicos part III post last month, I was contacted by Mr. Lombardi's office about setting up a conference call with the Executive Vice President to answer my remaining questions concerning the HeliScope technology, and also to talk more about the future of Helicos. Needless to say, I was delighted at this opportunity, and what follows is the transcript of our conversation.

Background on Steve Lombardi:
Helicos Executive Vice President and Chief Operating Officer

Mr. Lombardi joined Helicos in June 2006. He has over 27 years of commercial biotechnology experience, as a researcher and in various business management and executive positions. Prior to joining Helicos, he was Senior Vice President at Affymetrix, serving in executive positions in Corporate Development, Product Development and Research, and Corporate Marketing. Before Affymetrix, Mr. Lombardi worked for 16 years at Applied Biosystems in various business roles, first as a marketing manager and later as a senior executive.

From 1989 to 1998, Mr. Lombardi led the formation of the company's DNA sequencing and genetic analysis business, the products of which formed the technological basis of the worldwide Human Genome Project. He was also involved in the formation of Celera within the broader Applera corporate structure. Prior to joining Applied Biosystems, Mr. Lombardi spent 8+ years as a nucleic acids chemist focused on the development of novel approaches to DNA synthesis. He earned his BA in Biology from Merrimack College.

SL: As a startup doing the first generation of a disruptive technology like single molecule sequencing, there’s a huge amount of IP that a company like us can accumulate, and as such we want to hold that very close to our vest, for patenting reasons but also to avoid giving competitors a sense of what you’re really trying to do. So you’re absolutely right in what you’ve said, but we’re just in the process of going commercial and you’ll start to see more and more from us. We just went live with our new website, which is still a bit shallow in content, but you’ll see a lot more from us as we begin to roll out the commercial launch.

AW: OK, that’s great, and like I say, I’m sure this has got to be a very active and exciting time for all of you involved with Helicos. It definitely seems to be ramping up, as you’ve just described.

SL: Yeah it’s a fun time.

AW: So, as you may or may not know, I’m actually in the structural biology department here at NYU, and so I have a vested interest in single molecule technologies and research. Many of us structural biologists try to keep up to date with single molecule technology, and that’s part of the driving reason why I wrote my review. But let me just jump into some questions I had first and get those out of the way.

AW: So I wrote my review of the technology behind the HeliScope using Dr. Quake’s original PNAS paper along with the patents and all the other publicly available information, but there are still a number of questions that I raised in my article concerning the tSMS technology. Do you mind if I try to clarify some of those technical aspects of the HeliScope with you?

SL: Not at all; that’s one of the things that I wanted to do, to help clarify that for you.

AW: In the PNAS paper the Quake group was able to achieve the resolution and sensitivity needed by combining TIRM and FRET. Does the HeliScope still use both of these methods?

SL: The HeliScope still uses what’s called TIRF, “total internal reflection fluorescence”, but we’ve moved away from FRET. We found we didn’t need that modality; by using the right dyes we are able to get the signal to noise we need with the TIRF system and single dyes. What we’ve learned is that with TIRF and enough light, and we put a lot of laser flux into the system, we can get substantial signal. However, the whole key here is controlling noise. First of all, TIRF optically reduces the noise substantially. Additionally, one of our founding scientists is Tim Harris, who was at Bell Labs back in Princeton for many years; he was the first researcher to publish a paper on the detection of single DNA molecules on a surface. He’s got a lot of experience and brought a knowledge base that has allowed us to build a flow cell surface that is incredibly low noise in itself, but is very washable.

AW: OK, that was going to be my next question: does it require excess washing steps to remove nonspecifically bound nucleotides?

SL: Oh yes, a majority of our cycle time is washing. We have a spec for fluorescent background, but consider the density we’re talking about. We have one DNA molecule per square micron. That means high density with regard to information content, but it’s very low density at the molecular level. One of the neat things about this technology is that, because detecting single molecules is not the problem, at every step in the process we are detecting the emission of a single fluorescent dye. So during 30 cycles of single base addition, whether at step 1 or step 120, the signal to noise is constant because we’re always measuring a binary event: is the dye present or not? The huge advantage over the amplified, or think of them as “ensemble”, technologies is that even though you lose yield in a single molecule approach, you literally lose the signal in an ensemble approach. In the ensemble approach, you may start with, for instance, one million molecules, but then over time the signal degrades; fewer and fewer molecules report.

AW: That’s really amazing. By also cutting out FRET, I’m sure you guys have done away with a lot of the logistical issues of using it.

SL: Absolutely. The other thing we don’t have to deal with is the phasing issue. You get a lot of signal in an ensemble approach, but you also generate signal coming from the phased fluorophores, the nucleotides at n-1, n-2, n-3…, that are generating noise. Many cyclic chemistries like DNA synthesis or protein sequencing run into these phasing and yield problems. We are measuring binary events, starting at one molecule per square micron but having 3 billion strands down on our system; even a 10% yield of molecules reporting out to readable, alignable lengths, that’s still 300 million strands at 25 bases, or 7.5 billion bases per run.
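
[Author's note: a quick back-of-the-envelope sketch in Python of the throughput arithmetic above. The 3 billion strands, 10% yield, and 25-base reads are the illustrative numbers from Mr. Lombardi's answer, not official Helicos specs.]

    # Throughput sanity check using the illustrative figures quoted above.
    strands_loaded = 3_000_000_000  # strands on the flow cell (one per square micron)
    usable_yield = 0.10             # fraction reporting readable, alignable lengths
    read_length = 25                # bases per usable strand

    usable_strands = strands_loaded * usable_yield    # 300 million strands
    bases_per_run = usable_strands * read_length      # 7.5 billion bases

    print(f"usable strands: {usable_strands:,.0f}")   # 300,000,000
    print(f"bases per run:  {bases_per_run:,.0f}")    # 7,500,000,000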

AW: That really has cleared a lot up for me. So let me move on and ask you another question: how does the HeliScope deal with long homopolymer regions?

SL: So you accurately described what we announced at Marco Island earlier this year. We have invented unique molecules called virtual terminators, whose composition is still proprietary because we believe that there’s a huge amount of intellectual property in this. The other sequencing-by-synthesis technology out there that uses a polymerase and a fluorescent nucleotide instead of the pyrophosphate approach, the Solexa technology, has a 3’-blocked nucleotide.

AW: Yes, I read about that; you can cap it, but then there are other issues with uncapping.

SL: Bingo, so what we have done is to create what’s called a virtual terminator that adds no time to the cycle and kinetically inhibits the addition of the second base.

AW: So that’s what I thought. After reading the patent it seemed like it was largely done through kinetic means, which has a lot of benefits, especially because even if an incorporation is not made it doesn’t necessarily matter for the HeliScope system.

SL: Exactly, because we can take advantage of the fact that each strand grows asynchronously. You can’t have any asynchrony in an ensemble approach, so you have to drive all your kinetics to 100%. We don’t have that problem, so we can take advantage of kinetics with these virtual terminators to adjust the incorporation of the first base and the second base such that, in the cycle time that we use, we have virtually no second base addition. We are still doing research as to how far we can go with the homopolymer regions, and I can’t say where we are right now, but we’re making very good progress towards performance that is really exciting.

AW: I’m sure. It’s good to know that at least some of my assumptions were correct.

SL: I want to give somebody credit here, and it’s Bill Efcavitch, who is our senior VP of R&D. I’ve worked with Bill for 21 years and known him for 27. I was a nucleic acid chemist in my previous life, and was an early customer of ABI in their DNA synthesis business. Bill came from Marv Caruthers’ lab at the University of Colorado, where they invented the phosphoramidite chemistry. He moved as a postdoc from Colorado to ABI and brought that chemistry with him. I just want to give Bill some credit; he’s been part of the inventorship of phosphoramidites and the whole line of chemistry that ABI put out, their dye terminator chemistry, their BigDye chemistry, and now in a sense he is the father of virtual terminators. The guy is an absolute genius at bringing together chemistry and automation and has literally been involved in every major analytical molecular biology success story since 1980. We have the right guy doing this.

AW: Yeah, I think that’s really in keeping with the quality of the people working at Helicos and running the show. I wrote in my article that the whole company seems chock-full of pretty remarkable people.

SL: It’s a pretty hard thing not to join this company; I moved back here from California after 30 years.

AW: That says enough.

AW: Let me move on here. You really tied up some loose ends for me in terms of the technology; I’d like to continue by asking you about the performance of the HeliScope system. Is there any way you can give us some updated statistics?

SL: What we’ve been saying in the public domain is that we’ve had two production prototype HeliScopes running since around the first of the year; we call them mules, because they are the place where we do all our testing. We’re also doing all of our systems integration, so one is always running and generating results while we’re putting new parts on the other and doing the integration testing. During the IPO period we brought these prototypes up, and we were confident enough in the performance we saw from them right after the IPO that we began the build of the commercial HeliScope. The instruments we will be shipping are on the factory floor today, and will be put through a very rigorous verification and validation program. People have been asking us why we haven’t been doing a beta, which is a program you tend to do with existing technologies, when you’re doing the next version of something already in production. My experience, and Bill’s, with ABI for 25 years is that with something as new as this, what we want to do is sort of a beta on the factory floor. Where a company normally builds, tests, ships and then installs, knowing that the tests, either at the factory or at the install, correlate well to customer needs, in our current case we’re still figuring out what those parameters are. Along with this testing, we are bringing in-house customers who are going to be part of the CTS program. We have our service people learning about the instrument by helping to build it. We’ve also got field application scientists who are learning how the instrument works and who’ll support the instrument in the field. Our goal is to ship product by the end of the year, when we will send units to those customers and confirm performance with the same tests that were done on the manufacturing floor, but now in their production labs.

AW: So you’d say it’s safe to say that any numbers that have been bandied about concerning throughput are at this point still in the rumor phase?

SL: Well, we’ve quoted 25MB/hour for sequencing applications and 90MB/hour for our gene expression application, but those are ballpark numbers. We haven’t set commercial specs yet because we are still in the process of doing the validation and verification. We wouldn’t have started the commercial build had we not seen performance off the prototypes that gave us confidence that we could get near those specs. I can’t give you definitive numbers on this, but we would not have started the build if we were not confident in the system.
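
[Author's note: for scale, reading "MB/hour" as megabases per hour, here is what those ballpark rates would imply for one-fold coverage of a human-sized genome. The genome size and the megabase reading are my assumptions, not the company's.]

    # Time to 1x coverage of a 3-gigabase genome at the quoted ballpark rates.
    # "MB/hour" is read here as megabases per hour; illustrative only.
    genome_size = 3_000_000_000          # bases (assumed human-sized genome)
    seq_rate = 25_000_000                # bases/hour, sequencing mode
    expr_rate = 90_000_000               # bases/hour, gene expression mode

    print(f"1x genome at 25 MB/h: {genome_size / seq_rate:.0f} hours")    # 120 hours
    print(f"1x genome at 90 MB/h: {genome_size / expr_rate:.1f} hours")   # 33.3 hours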

AW: OK, and with those numbers we’re talking about, obviously accuracy is paramount.

SL: Absolutely, so the difference between the 25 and the 90 is what it takes to get to accuracy. For gene expression applications, where you can use tags, you can use compressed sequence space and can deal with errors in an orderly manner, we can run the instrument at 90MB/hour. For sequencing applications, where you don’t have that control, what we can do is again take advantage of the single molecule approach and do something that we call multi-pass sequencing. This single molecule advantage comes from the nature of biochemical errors in single-molecule mode, and what we’re finding is that those errors are stochastic. For example, let’s say that I have the same base represented one hundred times on one hundred different molecules located at different positions on the flow cell, generated that way because we use a prep process that stochastically fragments a genome. The error rate of that base is the aggregate of those signals. Let’s say we do a run and create a table of errors. What we can then do with single molecules is really neat: after the run, we can literally melt off the strand that we synthesized and resequence the same templates all over again, so that we have two independent measurements. By comparing the tables of each run, we find that the errors are chance events, so you get the same error rate in the aggregate, but the odds that the same base on the same strand will generate an error in both runs are infinitesimally small. What you can do then is square the per-run error rate to get the increase in accuracy. So what we’re doing is again taking advantage of single molecules, and of the fact that we’re starting with 3 billion strands, to do a dual pass run. We literally do two sequencing runs to get accuracies that are good enough.
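
[Author's note: a minimal sketch of the statistics behind the dual-pass argument, assuming errors are stochastic and independent between passes, as Mr. Lombardi describes. The 1% single-pass error rate is a made-up illustrative figure, not a Helicos number.]

    # Dual-pass error model: if per-base errors are independent between
    # passes, the same base is miscalled in BOTH passes with probability p**2.
    p = 0.01                # hypothetical per-base error rate for one pass
    p_dual = p ** 2         # probability the same base is wrong in both passes

    print(f"single-pass error rate: {p:.2%}")       # 1.00%
    print(f"dual-pass error rate:   {p_dual:.2%}")  # 0.01%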

SL: Now, the reason we need to do this dual pass is that if we look at the errors that occur, the insertion error rates and substitution error rates are infinitesimally small, but what look to be deletion errors are high. These deletion errors are in fact dark base additions; there is some proportion of molecules in the virtual terminator formulation that are dark. The base that’s added isn’t detected through signal, and we’re pretty sure that we know what the problem is. We think we can solve this through process development, because as you know we’re still dealing with research pilot-grade manufacturing. We think that as we move to full production, we will develop a highly repeatable process for these molecules and will be able to control quality to get that dark base addition down to a point where single pass accuracies will be sufficient. The result will be that we’re all of a sudden at 90MB/hour for single pass sequencing, and we think we’re currently going to be industry best at 25MB/hour.

AW: Yeah, 90MB/hour at acceptable accuracy is, I think, definitely two to four times better than the other industry leaders.

SL: And the other key thing about the sensitivity question you raised is that the error rate on base one and the error rate on base 30 are exactly the same. If you look at the error rate of the ensemble methods, this rate increases as a function of read length.

AW: In academia, at least, the highest fidelity polymerase misincorporates one in every million bases.

SL: There was a report in GenomeWeb last year, which we corroborated, that we have a relationship with Floyd Romesberg at Scripps concerning protein evolution; so one of the things that we are looking at is new polymerases via protein evolution.

AW: That’s a pretty exciting area of research as well.

SL: Another thing we’ve been able to do is attract an unbelievable Scientific Advisory Board. Most companies would be happy to have just one of the people that we’ve got. There are experts across so many different disciplines interested enough in our technology to be involved in the SAB, and this gives the whole team a huge advantage because they can utilize people like Steve Chu, who himself won a Nobel Prize for single molecule work.

AW: Yeah, he gave a talk here not too long ago on “optical tweezers”; mind-blowing stuff.

SL: The guy is just amazing, and we’re starting to round out the SAB on the application side. We have Victor Velculescu from Johns Hopkins, and we have Eugene Myers, a bioinformatician previously at Celera and now at Janelia Farm.

AW: I couldn’t agree more concerning Helicos’ SAB; I actually wrote about how impressive it is.

AW: Let me just finish up with a couple of quick questions. What about the cost of the HeliScope system? Any ballpark figure?

SL: We haven’t set the price yet. It will be more expensive than anything out there, but what we’ve been saying is that there are two key reasons why. One is our system configuration; we provide with the HeliScope a not-inexpensive image processing tower that in near-real-time converts the raw data to base calls, reliable bases, so that at the end of the run you have the ability to just port that data to your bioinformatics engines. Some of the other technologies at the end of their runs have raw data and require the customer to provide the compute facility to do the image processing, so it’s an apples and oranges comparison with the HeliScope, because the tower is not an insignificant part of the cost; we’re talking about terabytes of data that need to be processed and stored.

The second reason is more strategic. This is classic Bill Efcavitch, and I think a testament to the smarts of the board. They looked very closely at this market at the outset and concluded that this market is a long term play and that $1000 genome performance is going to be the real inflection point in the marketplace. So what Bill did was to design an instrument with headroom in imaging capacity. 3 billion strands is today’s density, but we have licensed technology from Steve Quake with which we can potentially increase that density by a factor of four. That’s 12 billion strands that we would be able to look at, and the instrument has this imaging capacity built into it. Think of it as headroom; an instrument that will get people to the $1000 genome. From a marketing perspective, we believe we can look at a customer and say this instrument will get you there. The way you will improve performance and get decreases in cost is by buying the next kit, which will have in it the better flow cells, the more dense flow cells, the better chemistry cycles that use less reagent and have more stability to them. Through that process we really believe that in the future we can give people $1000 genome performance. That’s important not only for sequencing but for things like digital gene expression, genomic signature sequencing, and apps like methylation. To me, the really exciting thing about this from a marketing guy’s perspective is that any time Bill makes an improvement in the assay, every customer’s application benefits from it, because the assay is agnostic to the application.
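
[Author's note: a sketch of the headroom arithmetic, combining the licensed 4x density increase mentioned above with the illustrative yield and read-length figures from earlier in the interview; all of these are assumptions, not specs.]

    # Imaging headroom: today's density versus the licensed 4x increase.
    strands_today = 3_000_000_000
    density_factor = 4                                 # licensed Quake technology
    strands_future = strands_today * density_factor    # 12 billion strands

    usable_yield = 0.10   # assumed, as in the earlier sketch
    read_length = 25      # assumed, as in the earlier sketch
    bases_per_run_future = strands_future * usable_yield * read_length

    print(f"strands with 4x density: {strands_future:,}")           # 12,000,000,000
    print(f"bases per run at 4x:     {bases_per_run_future:,.0f}")  # 30,000,000,000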

AW: Yeah, I think it’s elegant business planning to build a higher quality, expandable machine rather than something that may be outdated in 5 years.

SL: It gives the customer confidence that the instrument will last, because it’s not a cheap investment, and it also gives us the ability not to have to turn around and make a continued investment in engineering; we will continue to do maintenance engineering and make the thing better and cheaper, but we don’t have to build another instrument for a while. So with our R&D expenses, and you saw this right after the IPO, we’ve started a CSO office to do genomic collaborations; we’ve hired Patrice Milos from Pfizer. She was their head of pharmacogenomics and executive director of molecular profiling in Pfizer’s development organization. She’s building a world class genomics team to do collaborations with customers. So Bill focuses on making the assay cheaper, and Patrice will continue the effort to work with customers to build out all the applications and publish good science with them on the HeliScope.

AW: Right, publishing good science is always key.

AW: That’s actually really exciting, that’s much more than I expected, an expandable machine. One last question, which is more of a business question. From a medical standpoint, the area of human resequencing is presumed to hold much larger therapeutic interest than de novo sequencing in terms of where the market share is. Is Helicos specifically focusing on this type of application, or on broader de novo and resequencing efforts?

SL: The whole focus of this company from day one was not to be a replacement for ABI sequencers. The goal of this company was to enable, through the price performance and the simplicity of the workflow that you’ve described, an ability to let people ask new and more important questions of the genome. So we believe that we can create value by growing the market, not just replacing the current sequencing marketplace, and in doing so, medical resequencing is a huge opportunity, but it’s not the only opportunity. There are people incredibly interested in whole genome methylation studies, in making quantitative measurements of the genome, not just quantitative measurements of the transcriptome; applications like copy number variation, ChIP sequencing, and measuring the amount of a transcription factor binding to DNA. These measurements are all very important, and we fundamentally believe we’ve got the best technology when you get to quantitative apps, because we do so little to the sample in prepping that we don’t perturb what is digitally in that cell. We’ve found tremendous interest from people beyond the genome centers; they’re mostly in the academic health centers, people doing translational research. The fact that you can do single molecule measurements without having to build a genome center’s worth of infrastructure causes something to resonate when they realize that the patient’s DNA could be sitting on the HeliScope being measured. It resonates, and we’re getting tremendous interest and finding market segments within this life science area that no other technology can get to. A lot of these competing companies are saying “we’re going to find every ABI sequencer and replace it”. That’s not what we want to do; we want to add value to the marketplace, because there are places where there is lots of money, lots of samples, and lots of known genome annotation, but still we think there will be new genome annotation, and people will do genomics and genetics in new ways, which will grow the market. And that’s our whole idea.

AW: That was a great response, and it gave me some thoughts on how single molecule technology could be applied to epigenetics and the like. I couldn’t agree more that it’s a really exciting prospect.

SL: Cancer stem cell research is a hot area; people are figuring out how to purify those cells, but making quantitative measurements of them, to understand the functional genomics of how they regulate themselves, is hugely important, and we’ve got people looking to collaborate with us; they look at single molecule measurements and the ability to do these digital experiments on sequence and quantitation and say “this could be the answer.”

AW: That’s really great to hear. That’s it for all the questions I have; are there any other important facts or events that you’d like interested parties to be aware of?

SL: I just wanted to clarify the sensitivity issues with you, to reemphasize the issues around the homopolymer stuff, and to explain the accuracy a little further. Those were the key questions that you asked me; I didn’t have anything beyond that, given the scope of what you’ve written about us to date. But I hope that you’ll stay interested in us and help keep the community aware of who we are and what we are doing.