I’m back!

by Gwen on March 3, 2014

Well, it has been a real roller coaster ride since I last posted.

The iResearch project was put on hold when I was diagnosed with breast cancer last summer.  I took hormone therapy for 7 months and then had a double mastectomy on 1/6/14.  Here’s my goofy yet informative post-mastectomy video diary:

I’m happy to report that the iResearch project is back on track!  I was joined by an amazing group of women who are all living with breast cancer:  Valerie Green, Meg Patterson and Autumn Stanley.  Helayne Waldman, a breast cancer nutritionist, has also signed on as an advisor.

On August 14, 2013, we formed a new non-profit called People-Powered Research (a.k.a. P2R).  Dr. John Boscardin, a biostatistician at the University of California, San Francisco, agreed to serve as an advisor before the project was put on hold.  I really hope he is still available.  We are in the process of enlisting more biostatistical support.  Once that is done, we will set up our web page and find a fiscal sponsor.

If you have time and skills to donate to this worthy cause, please send an email to People-Powered Research.  Thanks!

 


In 2009, a research article was published in a highly respected journal.  Its conclusions surged onto the shores of environmental science like a tidal wave – the earth’s oceans would rise as high as a whopping 32 inches by the year 2100.  That would put the homes of 25 million to 40 million people at risk in the not-so-distant future.  The authors, Mark Siddall and his colleagues at the University of Bristol, were hailed for their research.  As a person who lives right next to the ocean in the San Francisco Bay Area, I felt the impact of this research personally.

But by 2010, these researcher-heroes were seen as slackers who had hampered the progress of environmental science by publishing erroneous research.  They retracted their once-lauded paper (the first retraction in the journal Nature Geoscience).

What was their mistake?  It was overlooking something called a confounding variable.  Confounding variables have been the bane of many a scientist’s existence, making folks look like fools and ruining careers in the process.

So what is a confounding variable?  Put simply, it is an outside factor that differs between the groups being compared and also affects the outcome, so it can introduce error into a statistical conclusion.

For example, let’s say I take 100 women with breast cancer and split them into two groups.  One group uses Pill X and the other uses a sugar pill (placebo).  At the end of the study, the group that took Pill X for 5 years had one death from breast cancer.  The placebo group had 50 deaths.  So the manufacturer puts out a press release (“Pill X results in a 50-fold increase in breast cancer survival!”) and sits back, preparing to rake in millions of dollars.  But what about confounding variables?

An enterprising statistician looks at the data and notes that everyone in the Pill X group had Stage I breast cancer, while everyone in the placebo group had Stage IV breast cancer.  In this example, breast cancer stage is a confounding variable.  When the manufacturer reluctantly redoes the study, making sure there are equal numbers of women at each stage in both groups, the death rate turns out to be the same in both groups.  Scratch Pill X off the list!
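If you like to see the arithmetic spelled out, here is a tiny sketch (in Python) of that same story.  The counts are invented purely for illustration; the point is that the dramatic Pill X “effect” disappears once stage is balanced across the two arms.

```python
# Made-up counts mirroring the Pill X story above; breast cancer stage is the confounder.

# Original, confounded design: every Pill X patient was Stage I,
# every placebo patient was Stage IV.
confounded = [
    ("Pill X",  "Stage I",   1, 50),   # (arm, stage, deaths, patients)
    ("Placebo", "Stage IV", 50, 50),
]

print("Confounded study:")
for arm, stage, deaths, n in confounded:
    print(f"  {arm:7s} ({stage}): {deaths / n:.0%} died")

# Redone study: equal numbers of Stage I and Stage IV women in each arm.
balanced = [
    ("Pill X",  "Stage I",   1, 25), ("Pill X",  "Stage IV", 25, 25),
    ("Placebo", "Stage I",   1, 25), ("Placebo", "Stage IV", 25, 25),
]

print("\nBalanced study (stage held equal):")
for arm in ("Pill X", "Placebo"):
    deaths = sum(d for a, _, d, _ in balanced if a == arm)
    total = sum(n for a, _, _, n in balanced if a == arm)
    print(f"  {arm:7s}: {deaths / total:.0%} died overall")
# Both arms land at the same overall death rate once stage is balanced,
# so the dramatic "Pill X effect" evaporates.
```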

So you can see it is very important to note and adjust for confounding variables in every clinical study.  It takes a lot of knowledge and experience to pull that off successfully.

Have you ever wondered why phase III clinical trials typically have only two groups of patients, with each patient randomly assigned to one or the other?  The only difference between the groups is that one gets the new treatment and the other does not.   It turns out that this design is the best way to prevent known and unknown confounding variables from introducing errors into the research results – if the two groups are essentially the same except for whether they took the drug, then any difference in outcome must be due to the drug.
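Here is a toy simulation (in Python, with made-up patients, not anyone’s trial data) of that idea: we give every patient a hidden risk factor that nobody measures, assign treatment by coin flip, and watch the hidden factor end up evenly split between the two arms.

```python
# A toy simulation of why random assignment protects against confounders
# you never even measured.
import random

random.seed(42)

# Give each of 1,000 imaginary patients a hidden risk factor no one knows about.
patients = [{"hidden_risk": random.random()} for _ in range(1000)]

# Randomize: a coin flip decides who gets the new treatment.
for p in patients:
    p["arm"] = "treatment" if random.random() < 0.5 else "control"

for arm in ("treatment", "control"):
    risks = [p["hidden_risk"] for p in patients if p["arm"] == arm]
    print(f"{arm:9s}: n = {len(risks):4d}, average hidden risk = {sum(risks) / len(risks):.3f}")

# Both averages come out close to 0.5: the hidden factor is spread evenly
# across the two arms without anyone measuring it, so it can't masquerade
# as a drug effect.
```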

But how about a study such as our proposed MBC iResearch project?  There will be not one but a hundred or more differences from one participant to another.  Some folks will be taking Vitamin D, others will be using iodine, still others will be exercising regularly, and some will choose not to do chemotherapy.  You get the idea.  These differences are all potential confounding variables.  If standard statistical methods have a hard enough time compensating for one or two confounding variables, where do you find one that can handle dozens of such factors and still give you accurate results?

As I entered into this iResearch project, I knew one of the hurdles ahead would be finding just such an algorithm – one that would convert our mountains of data into a relevant guide to what works and what doesn’t.  I had planned to spend lots of time going from one epidemiology department to another, trying to find a new kind of statistics that could handle literally hundreds of confounders, both known and unknown.

But then fate stepped in!  While flying to the east coast last month, I ran out of reading material and popped into a newsstand to buy a magazine.  The latest issue of Discover magazine looked interesting, so I picked it up.  Lo and behold, there was an excellent article about two Harvard Medical School researchers, Jeremy Rassen and Sebastian Schneeweiss, who had designed a powerful statistical algorithm that could sift through mountains of data and automatically sort out confounding variables, known and unknown.  Their algorithm could handle up to 500 confounding variables at one time!

This new approach is called the high-dimensional propensity score.  And it’s not just a great idea whose time has come.  Rassen and Schneeweiss have put their algorithm to the test.  They entered the raw data from previously published clinical studies and came up with the same conclusions that researchers had reached the old-fashioned way – painstakingly finding and compensating for each potential confounder one by one.  And the icing on the cake:  the creators of this wonderful new approach to statistics have made their algorithm available for free on the internet!
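To be clear, the snippet below is not their algorithm; it is just a minimal sketch (in Python, on simulated data) of the ordinary propensity-score idea the high-dimensional version builds on: model each person’s chance of receiving the treatment from their covariates, then compare treated and untreated people whose chances were similar.

```python
# NOT Rassen and Schneeweiss's algorithm -- just a minimal sketch of the ordinary
# propensity-score idea that the high-dimensional version builds on, run on
# simulated data so the "right answer" (no treatment effect) is known in advance.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
covariates = rng.normal(size=(n, 10))   # stand-ins for stage, age, diet, ...
sickness = covariates.sum(axis=1)

# Sicker people are more likely to receive the treatment (that's the confounding).
treated = rng.random(n) < 1 / (1 + np.exp(-sickness))

# The outcome depends on how sick you are, but NOT on the treatment itself.
outcome = sickness + rng.normal(size=n)

naive = outcome[treated].mean() - outcome[~treated].mean()
print(f"Naive treated-vs-untreated difference: {naive:+.2f}  (true effect is 0)")

# Step 1: model each person's probability of being treated from their covariates.
ps = LogisticRegression(max_iter=1000).fit(covariates, treated).predict_proba(covariates)[:, 1]

# Step 2: compare treated vs. untreated people with similar propensity scores.
edges = np.quantile(ps, np.linspace(0, 1, 11))
diffs = []
for lo, hi in zip(edges[:-1], edges[1:]):
    stratum = (ps >= lo) & (ps <= hi)
    t, c = stratum & treated, stratum & ~treated
    if t.any() and c.any():
        diffs.append(outcome[t].mean() - outcome[c].mean())
print(f"Propensity-stratified difference:      {np.mean(diffs):+.2f}  (much closer to 0)")
```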

You can imagine my excitement.  No doubt the person sitting next to me on the plane must have been quite bemused by the dreadlocked African-American woman gushing over a statistical algorithm. (“You’re all excited about a high-dimensional …what?”)

So here are the next steps.  First, find a graduate epidemiology or statistics student to participate in this project.  The student will help me understand the strengths and weaknesses of this algorithm and will help us design our database to take full advantage of its strengths.  In return, this student will co-author the research paper(s) that come out of our endeavors.  I have already put out a call to Bay Area epidemiology departments.

Second, the first teleconference of the MBC iResearch group will happen soon, hopefully within the next 3 weeks.  Everyone who responded to my call to action (thanks to those 20 dedicated women!) will be getting an email so we can zero in on the best time.

As always, I look forward to hearing your comments or questions.

Aut viam inveniam aut faciam. (I shall either find a way or make one.)

*A tip of the hat to Amos Zeeberg for his edifying article, “Hidden Truths of Health,” in the August 2012 edition of Discover magazine.*


Calling all statisticians!

by Gwen May 1, 2012

In order for the MBC iResearch project to succeed, we need to cross two hurdles. The first hurdle is getting a core group of 10 or so folks with breast cancer (metastatic or not) or affected by breast cancer to form the focus group that helps determine the scope and design of the database.  Today, […]


An edifying discussion with Teresa Peters…

by Gwen April 25, 2012

Back in March, I was fortunate to meet many people actively working to add to the research around complementary and alternative approaches to breast cancer. One person in particular stands out: Teresa Peters, a very bright and amazing woman with stage IV breast cancer. When she heard my call for those interested in building an online […]
