The nature of epidemiologic research means it is more often done collaboratively than not. Collaborative research is inherently difficult and can add high overhead to a scientific project, yet scientists are being pushed to do more of it with little extra support. This additional overhead can slow down research, which means wasted money, lost opportunity, and frustration for scientists. Coordinating Centers (CCs) are one tool that can help offload some of the administrative burden from investigators.
A well-built CC can reduce this overhead and lift some of the burden from researchers by managing the administrative aspects, facilitating collaborative activities, and empowering investigators to focus on the science, thus improving every stage of a study. As a result, funded projects can run more smoothly and are more likely to reach their scientific goals, creating a greater return on a funding agency’s investment. A good CC will have the expertise and resources available to facilitate protocol development, ensure timely information exchange, and coordinate data management and statistical analysis. CC staff will also take the lead on bringing all parties to the table and ensuring that all participants have an equal voice in the areas of the project appropriate to their expertise. It is these “soft” areas of research that become increasingly important, even mission critical, in collaborative projects, yet they receive the least attention from research teams.
Although it is tacitly recognized that a good CC is essential to the success of any multi-site collaborative project, very little study has been done on what makes a CC successful, why some CCs fail, or how to build a CC that meets the needs of a given project. Moreover, very little published guidance is available, as few CCs outside the clinical-trial realm write about their work [see, for example, 1–4]. CC directors are largely forced to reinvent the process through trial and error with each new collaboration. This wastes not only precious funds but also experience and time, delaying the achievement of scientific goals. Without well-validated “best practices,” it is impossible for a CC manager to be sure s/he is not only avoiding the worst mistakes of the past but also maximizing resources by running a CC as effectively and efficiently as possible.
The ACC is a collaborative research project that has made strong scientific and organizational progress over the past three years. Our hope is that, by sharing our experience building the ACC CC, we can begin a conversation about what it means to run a coordinating center for multi-institutional collaboration and help other collaborative projects solve some of the issues associated with collaborative research (as well as learn from others), thus moving science forward with greater ease. Our group plans to engage in future research to investigate how other CCs run and what differences there might be between domestic and international collaborative projects.
One of the most substantial barriers to successful collaboration is a lack of trust and community among participants. It is almost inevitable that scientists will eventually be collaborating with investigators with whom they have previously competed for funds. The first objective in building a CC is to transform a loose group of individuals into a community of researchers all focused on the same goal.
A Policies & Procedures Manual is essential, laying out expectations, rights, and responsibilities for collaborators. It is crucial that everyone involved in the collaboration has a firm understanding of what it means to be a “member” of the group.
When the CC first started working with the ACC, there was just a nascent sense of community. The ACC was a loose consortium that met twice each year and exchanged emails occasionally. Our first order of business was to develop that loose consortium into a more coordinated community of investigators focused on building the ACC. At the next Consortium meeting, we refined and expanded the goals of the ACC, coming to this consensus:
The mission of the ACC is two-fold: (1) to serve as a platform for cross-cohort collaborative projects and combined analysis, and (2) to act as an incubator for new cohorts.
We debated what it meant to be a member of the ACC and what rights and responsibilities came with membership. In other words, we defined who we were.
We were struck at our recent meeting in Washington, DC, by how different the energy of the group was as compared to earlier meetings. The atmosphere was much less formal, more relaxed and open. Clearly, members felt comfortable with one another and felt at ease speaking their minds.
As part of this “definition” phase, we also registered a domain name [6]. While this seems like a technical detail, it was an important signal to the group that we were a legitimate organization and planned to continue to grow the ACC. The domain serves as an anchor for the collaboration, a place current and potential members can bookmark and visit for current information on the project. We also created email lists, such as “all members,” utilizing this domain. All of these steps were aimed specifically at helping the ACC gel as a community.
Whereas collaborations frequently begin through established social networks, they quickly expand to include people who have not yet developed comfortable relationships. A CC is responsible for building trust by developing both the public (group-wide) and private (inter-investigator) spaces of the collaboration, as described in the literature on Communities of Practice [7].
The ACC, for example, fosters collaboration through two face-to-face meetings each year: one in the US in conjunction with the American Association for Cancer Research (AACR) annual meeting, and one in Asia, hosted each time by a different member cohort. At these meetings, we ensure that there is enough time for personal conversations. Group dinners allow investigators to nurture existing relationships and begin new ones.
Large collaborations have high overhead and require a substantial amount of administration, which we categorize as Operations Management. In the CC, we are responsible for the production of documents such as the Policies & Procedures Manual, which puts in place a structure for the ACC. We also guide the development of study protocols and manage all IRB documentation.
Information management is a huge challenge for collaborative groups [8]. A healthy, thriving collaboration generates a substantial amount of “stuff”, also known as artifacts. These include, but are not limited to, data, specimens, and manuscripts, as well as writing teams, working groups, SOPs, IRB approvals, tools, agendas, and meeting minutes. Collaborators, including CC staff, need to be able to trust that they can find the artifacts they need when they need them.
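One lightweight way to frame this problem is as a catalog of artifact records with enough metadata to support retrieval. The sketch below is illustrative only; the field names and entries are assumptions, not a description of the ACC's actual document-management system.

```python
from dataclasses import dataclass, field
from datetime import date

# Minimal sketch of an artifact catalog entry. The fields are assumptions about
# what a CC might track, not the ACC's actual metadata schema.
@dataclass
class Artifact:
    title: str
    kind: str                  # e.g. "protocol", "SOP", "IRB approval", "minutes"
    owner: str                 # responsible working group or staff member
    created: date
    version: str = "1.0"
    tags: list = field(default_factory=list)

catalog = [
    Artifact("BMI WG analysis plan", "protocol", "BMI WG", date(2009, 3, 1), "2.1", ["BMI"]),
    Artifact("Policies & Procedures Manual", "SOP", "CC staff", date(2008, 9, 15)),
]

# Retrieval then becomes a matter of filtering on metadata, e.g. all BMI-related artifacts:
print([a.title for a in catalog if "BMI" in a.tags])
```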
One of the overarching scientific goals of the ACC is to achieve data harmonization so that data sets from independent cohorts can be combined and analyzed as one meta-cohort. To assist with that goal, the CC utilizes a Common Data Elements (CDE) specialist who focuses on data harmonization. With an appropriate scientific background and extensive experience developing questionnaires and mapping questionnaire elements to data dictionaries, the CDE specialist has analyzed the instruments used by several participating ACC cohorts to establish a common, core set of questions that all groups are asking and that are important to the scientific questions the ACC is trying to answer. This is exceptionally challenging. We need to take into account the scientific goals of the individual studies, while also navigating language and cultural differences.
The CC has put substantial effort into helping the cohorts harmonize their data-collection instruments, with the ultimate goal of enabling cross-cohort analysis of the collected data. This is an enormous challenge. Consider the following three questions:
- Have you ever had cancer?
- Have you ever been treated for cancer?
- Do you have cancer?
These questions differ in subtle but important ways and yield data that are not necessarily comparable. The challenge, then, is whether to transform the data after collection or to harmonize the data-collection instruments themselves, in such a way that no individual cohort is compromised but the combined data are comparable. This is a continuing area of focus for the CC. Achieving full data harmonization will require an enormous time commitment on the part of the cohorts as well as the CC, as each data element must be negotiated and discussed extensively to ensure that everyone agrees on its meaning. In this specific area of data harmonization, the organization P3G has put considerable time and effort into developing tools to support collaborative research [9].
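To make the harmonization problem concrete, the sketch below shows what a crosswalk from cohort-specific questionnaire items to a single common data element might look like. The cohort names, variable names, and codings are hypothetical, not the ACC's actual data dictionary.

```python
# Hypothetical crosswalk from cohort-specific questionnaire items to one common
# data element ("ever_cancer"). All names and codings are illustrative only.
CROSSWALK = {
    "cohort_A": {"question": "Have you ever had cancer?",
                 "variable": "q12_cancer", "yes": {1}, "no": {2}},
    "cohort_B": {"question": "Have you ever been treated for cancer?",
                 "variable": "ca_treat", "yes": {"Y"}, "no": {"N"}},
    "cohort_C": {"question": "Do you have cancer?",
                 "variable": "cancer_now", "yes": {1}, "no": {0}},
}

def to_common(cohort: str, raw_value) -> str:
    """Map a cohort-specific response onto the common element.

    Cohort C's question captures current status only, so a "no" cannot be
    recoded as "never had cancer"; it is flagged for review instead.
    """
    rule = CROSSWALK[cohort]
    if raw_value in rule["yes"]:
        return "ever_cancer: yes"
    if raw_value in rule["no"]:
        return "needs_review" if cohort == "cohort_C" else "ever_cancer: no"
    return "missing"

print(to_common("cohort_A", 1))   # ever_cancer: yes
print(to_common("cohort_C", 0))   # needs_review
```

Even in this toy example, the decision of how to treat cohort C's responses is a scientific judgment that must be negotiated with the cohort, which is precisely why each data element requires extensive discussion.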
Lastly, at least one member of our statistical and data management team (SDMT) attends each general membership and working group meeting. This team is deeply involved in the development of each cross-cohort project’s protocol, guiding the selection and definition of the data set that will be submitted and crafting the analysis plan. Members of this team answer cohorts’ questions about how to extract data elements correctly, then perform quality control and data cleaning once the data have been submitted.
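As a rough illustration of the kind of submission checks this involves, the sketch below runs simple completeness and plausibility checks on an incoming data set. The column names and ranges are assumptions made for the example, not the SDMT's actual pipeline.

```python
import pandas as pd

# Illustrative completeness and range checks for a submitted cohort data set.
# Column names and plausibility limits are assumptions, not ACC specifications.
REQUIRED = ["subject_id", "age_at_baseline", "height_cm", "weight_kg", "vital_status"]
RANGES = {"age_at_baseline": (18, 110), "height_cm": (100, 220), "weight_kg": (25, 250)}

def check_submission(df: pd.DataFrame) -> list:
    issues = []
    for col in REQUIRED:
        if col not in df.columns:
            issues.append(f"missing column: {col}")
    if "subject_id" in df.columns and df["subject_id"].duplicated().any():
        issues.append("duplicate subject_id values")
    for col, (lo, hi) in RANGES.items():
        if col in df.columns:
            n_bad = ((df[col] < lo) | (df[col] > hi)).sum()
            if n_bad:
                issues.append(f"{col}: {n_bad} values outside [{lo}, {hi}]")
    return issues

demo = pd.DataFrame({"subject_id": [1, 2, 2],
                     "age_at_baseline": [45, 200, 60],
                     "height_cm": [160, 172, 181],
                     "weight_kg": [60, 75, 80],
                     "vital_status": ["alive", "dead", "alive"]})
for issue in check_submission(demo):
    print(issue)
```

Questions raised by checks like these are exactly what the per-cohort workspaces described below were designed to resolve.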
At the Spring meeting in San Diego, CA, in 2008, two of our members proposed that the ACC begin a pilot project on Body Mass Index (BMI) in Asian populations, the first cross-cohort collaborative project of the ACC. The CC was instrumental in launching this project and keeping it moving forward. We began by working with the FHCRC library to gather a list of cohorts working in Asia that were focused on BMI, ending up with a list of more than 75 cohorts that met our minimum criteria for inclusion.
At the first BMI Working Group (BMI WG) conference call, each participant agreed to contact the cohorts of his/her country. Subsequent calls and many emails led to the development of a project protocol, specifying the analysis plan and requested variables. Along with the WG leaders, CC staff developed a survey that was sent to potential collaborators, soliciting information on each cohort, such as number of participants, years of follow-up, number of deaths, and whether they had collected data on a variety of variables (height, weight, smoking, alcohol use, for example). A spreadsheet of preferred primary and confounding variables was developed and distributed to the participating cohorts so that they could begin preparing data sets.
Our statistical team analyzed each data set as it arrived, based on the approved analysis plan. Because our collaborators are not collocated, we needed to devise a way for them to review the analysis of their data and answer any questions from the analysis team. We decided to do this on the ACC portal. We created a “workspace” for each of the participating cohorts, where we posted files containing graphs of their analysis results and any questions from the analysis team. We also created a discussion area where cohorts could respond to the questions, ask questions of their own, and receive more information from the analysis team. These workspaces were accessible only to the cohort PIs and their data management teams, as well as the WG chairs and the CC staff, ensuring a high level of confidentiality and promoting frank discussions. Given our goal of developing trust with this first cross-cohort project, it was crucial that no one felt embarrassed if they made an error building and transmitting data or had misunderstood the instructions.
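Concretely, the access rule for each workspace can be summarized as in the sketch below; the group names are hypothetical, and the ACC portal's actual configuration is not shown here.

```python
# Rough sketch of per-cohort workspace permissions (group names are hypothetical).
WORKSPACE_ACCESS = {
    "cohort_A_workspace": ["cohort_A_PI", "cohort_A_data_team", "BMI_WG_chairs", "CC_staff"],
}

def can_view(user_group: str, workspace: str) -> bool:
    """Only the cohort's own team, the WG chairs, and CC staff can see a workspace."""
    return user_group in WORKSPACE_ACCESS.get(workspace, [])

print(can_view("cohort_A_PI", "cohort_A_workspace"))   # True
print(can_view("cohort_B_PI", "cohort_A_workspace"))   # False
```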
The workspaces were a resounding success. They saved a substantial amount of time and money because we did not need to have lengthy conference calls. Cohorts could take their time looking at their analysis results and formulating thoughtful answers to questions. Although all of our PIs are fluent in English, not all of their data managers are. Using the workspaces gave the teams room to work in their first languages, then share their thoughts in English.
The BMI project has been successful beyond our expectations, resulting in the first published paper of the ACC [10].
We have clearly proven that the ACC CC has the resources and expertise in place to support a scientifically important and interesting study. We have also shown that the ACC cohorts are interested in pooling data when the project has scientific importance. Harmonizing data elements for the sake of harmonization is not interesting; harmonizing for the sake of moving science forward is.
One interesting, if predictable, outcome of the BMI project is the realization that, once a data set has been collected, it can be used for other purposes. Discussion of the existing data set is already spurring potential new projects.
Since the BMI project was completed, several additional projects using this core BMI-focused data set have been proposed, and more are currently being developed or reviewed for implementation. Cohort PIs have agreed to send an English-language version of their survey instruments to the CC so that a map of available data elements can be created. It is our hypothesis that such a map will spur even more new ideas as investigators learn what data are available across a very large number of individuals.
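For illustration, such a map can be as simple as a cohort-by-element availability matrix; the cohort names and data elements below are hypothetical, and the real map would be built from the English-language instruments themselves.

```python
import pandas as pd

# Hypothetical availability map: rows are cohorts, columns are harmonized data
# elements, values indicate whether the cohort collected that element.
availability = pd.DataFrame(
    {
        "height":         [True, True, True],
        "weight":         [True, True, True],
        "smoking":        [True, False, True],
        "alcohol_use":    [True, True, False],
        "cancer_history": [True, True, True],
    },
    index=["Cohort A", "Cohort B", "Cohort C"],
)

# For any candidate project, investigators can ask which cohorts hold every
# required element, and therefore how large the combined analysis could be.
required = ["weight", "smoking", "cancer_history"]
eligible = availability.index[availability[required].all(axis=1)]
print(list(eligible))   # cohorts holding all required elements
```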
The ACC submitted its first two grant proposals to NCI in 2009; both used the R03 small grants mechanism. The first proposal focused on rare cancer and BMI, and the second will support a subsequent phase of the original BMI and mortality project. Happily, both projects have been funded.
We have investigated many major foundation grant programs and government funding programs in an effort to secure long-term funding for the CC itself, but have not yet found the right program. Our unique structure makes it possible for us to accomplish things that other groups cannot, but also makes it more difficult to find our funding niche. Funding agencies want to fund projects, not infrastructure, no matter how crucial that infrastructure may be.
As such, we are currently drafting two larger grants that will include substantial support for the CC. We have also been extremely fortunate to receive generous support from the FHCRC. While the amount of support required varies substantially according to the projects underway, the CC generally requires funding in the range of $300,000–$400,000 per year.