EMIF: E-managing the Future of Health Data
16th-17th March 2016, Budapest
REPORT
Whatever the power of digital technologies and no matter the size of the data mountains, the true potential of digitised or machine-readable information relating to health will not be realised without the key ingredient of trust.
The IMI European Medical Information Framework (EMIF) has put engendering trust at the centre of the programme’s vision of creating the European hub for health and care data intelligence, from which new insights into diseases and their treatment will flow.
In support of these objectives, EMIF’s 2016 symposium in Budapest explored how real world health care data can be applied to inform and optimise the development of new medicines, and to underpin approvals, access, pharmacovigilance and demonstrations of effectiveness in the real world, across the product life cycle.
In a day of high-level discussion and debate among experts from across the healthcare chain, a series of points emerged that need attention if the barriers to the re-use of data are to be addressed. Acting on them would support interactions with regulators, health technology assessment organisations and access bodies, ensure those bodies have the right evidence, and maximise the benefits of data re-use for the good of patients.
Action points:
- Demonstrate there are benefits for all
If trust is central to achieving EMIF’s objectives, then demonstrating that there are clear benefits for all partners in the re-use of data is a prerequisite. And the interests of all stakeholders must be balanced via a quid pro quo.
- Openness and transparency hold the key
The essential requirement is to pull public and private data resources together, to enable collaboration and joint problem-solving, as well as to have clarity of purpose on real world data use within health research.
- The nature of drug development needs to evolve
Real world data provides the potential to identify high risk populations and, in advance of symptoms, monitor and predict the onset of disease. Future drug discovery needs to be informed by such real world insights, as much as it is informed by disease symptomatology today.
- Establish the context for making greater use of real world data and share the risks
Deployment of real world data in drug discovery and development requires people to work on consensually agreed research questions that reflect not the individual requirements of academics or companies, but the views and needs of regulators and patients.
- Social media could be accessible within certain rules
Data from social media represents an invaluable source of real world data. However, it is noisy, and many scientists are wary of it, seeing it as subjective and biased. Effort is needed to determine how and when social media outputs are useful, for example in detecting adverse events or in understanding which aspects of their disease most burden people with long-term conditions.
- Ensure data are fit for purpose
It is necessary to address issues of quality control and quality assurance of data. There should be collective agreement, supported by regulators, on acceptable data standards for specific purposes.
- There is an urgent need to increase skills levels
There is plenty of data, but very few people with the expertise to interpret and exploit it to generate evidence and promote public health. This gap urgently needs to be filled.
- Promote the importance of re-using health data
There is individual as well as societal benefit in allowing re-use of data. Education should support the public in understanding the benefits to them and society of real world data-driven research. People should be shown what broad consent can mean for them and given a voice in how, and for what purposes, data are used.
- Know where your data are coming from
Data has context: why was it collected? What was collected? It is critical to understand this context when selecting appropriate databases for a particular piece of research.
- Collect examples of real world evidence in practice
The pace at which real world evidence becomes routinely applied will vary from one healthcare system to another. It is necessary to collect real life examples that demonstrate the advantages and will motivate busy professionals to change their practices and, more importantly, to appreciate the need for high quality medical record keeping to support real world data access and use.
In summary, there is huge potential value in using real world data to optimise the product life cycle, but much more work to be done to achieve this.
The vision is that the end result of the five-year EMIF programme will be an ecosystem where data sources are clearly mapped, researchers can assess if a particular source suits the objectives of specific projects and they can readily engage with data owners to get permission for its re-use.
As Bart Vannieuwenhuyse of Janssen noted, beyond the confines of RCTs, pharmaceutical companies currently have little exposure to large groups of patients. Yet information generated in the course of their care, including health records, pharmacy, lab tests, claims data, and so on, could provide crucial inputs to the development of new drugs.
To ensure that the work undertaken by EMIF has relevance to, and can shed light on, unmet medical needs, EMIF-AD is assessing whether existing datasets and bio-banks can be used to identify early markers of Alzheimer’s disease, while a second project, EMIF-Metabolic, is studying risk markers for developing metabolic complications of obesity.
These elements of EMIF are underpinned by EMIF-Platform, in which a common data model and a number of tools, for example to support biomarker discovery, are being developed.
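As a purely illustrative sketch of what a common data model makes possible, the snippet below maps records from two hypothetical source schemas onto a single shared schema so that one query can run over both. The field names and mapping table are invented for illustration and do not represent EMIF’s actual model.

```python
def from_source_a(record: dict) -> dict:
    """Source A already stores year of birth and ICD-10 codes."""
    return {
        "person_id": record["patient_ref"],
        "birth_year": record["yob"],
        "sex": record["gender"],
        "diagnosis_code": record["icd10"],
    }

def from_source_b(record: dict) -> dict:
    """Source B stores a full date of birth and a local code that needs mapping."""
    local_to_icd10 = {"DM2": "E11", "AD": "G30"}   # invented mapping table
    return {
        "person_id": record["id"],
        "birth_year": int(record["dob"][:4]),
        "sex": record["sex"],
        "diagnosis_code": local_to_icd10.get(record["local_code"], "UNKNOWN"),
    }

def count_diagnosis(records, code):
    """A query written once against the common schema, usable on any mapped source."""
    return sum(1 for r in records if r["diagnosis_code"] == code)

harmonised = [
    from_source_a({"patient_ref": "A1", "yob": 1950, "gender": "F", "icd10": "G30"}),
    from_source_b({"id": "B7", "dob": "1948-05-02", "sex": "M", "local_code": "AD"}),
]
print(count_diagnosis(harmonised, "G30"))   # -> 2
```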
Real world data has applications from biomarker discovery and predictive modelling in discovery, to trial design and recruitment in development, and on to providing evidence of effectiveness and monitoring safety once a drug is approved and on the market. “There are clear benefits for all partners in re-using data,” Vannieuwenhuyse said.
Johan Liwing, Director, Market Access RWE Partnerships, Global Commercial Strategy Organisation at Janssen, described how the company is partnering with leading institutions in the US and Europe to promote the use of real world evidence in drug discovery.
The aim is to analyse multiple data sources to generate evidence of disease pathways, healthcare delivery and the effectiveness of treatments, and use the outputs to improve and advance methodologies and support medical decision-making.
The vision is that this underpins a paradigm shift, in which rather than diagnosing and treating observable symptoms of disease, it will be possible to pinpoint the initiation of a disease-causing process and treat to pre-empt the symptoms.
“We believe this will be a big part of the future. But there are challenges, because it will change drug development,” Liwing said. To achieve this shift it will be necessary to continuously capture and monitor health data, in order to predict the onset of disease processes.
One example of how Janssen is applying this ‘Disease Interception Accelerator’ concept is in childhood Type I diabetes, where tracking the production of autoantibodies alongside HbA1c (glycated haemoglobin) levels has been shown to be predictive of progression to insulin dependency.
“If we could monitor [these two parameters], we could delay development of symptoms,” Liwing said.
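To make the two-parameter monitoring idea concrete, here is a deliberately simplified sketch of such a rule. The thresholds and logic are invented for illustration and are not Janssen’s actual model.

```python
# Illustrative only: a toy rule for flagging rising risk of progression to
# insulin dependency from two monitored markers. Thresholds are invented.
def progression_risk(autoantibody_count: int, hba1c_percent: float) -> str:
    """Return a coarse risk band from islet-autoantibody count and HbA1c."""
    if autoantibody_count >= 2 and hba1c_percent >= 5.7:
        return "high"       # multiple autoantibodies plus rising glycation
    if autoantibody_count >= 2 or hba1c_percent >= 5.7:
        return "moderate"
    return "low"

# Longitudinal readings for one synthetic child: (autoantibody count, HbA1c %).
readings = [(1, 5.2), (2, 5.4), (2, 5.9)]
for antibodies, hba1c in readings:
    print(antibodies, hba1c, progression_risk(antibodies, hba1c))
```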
Identifying precise research questions that reflect the interests and concerns of patients and regulators, whilst acknowledging the requirements of academic researchers and companies, and that can be addressed by interrogating real world data, is the route to promoting risk-sharing amongst stakeholders, suggested John Gallacher of Oxford University’s Medical Sciences Division, Director of the UK Medical Research Council’s Dementia Platform.
To take one example, the link between blood pressure and heart disease illustrates the need for large study sets: with a sample size of 5,000 there are hints of where the greatest risk may lie; at 50,000 subjects the focus sharpens; and all becomes clear when looking at 500,000 subjects, Gallacher said. “It’s effectively a definitive answer: large datasets equals answers.”
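A short worked example, using assumed event rates rather than figures from the talk, shows the statistical reason behind this: the 95% confidence interval around an observed risk difference narrows roughly with the square root of the sample size.

```python
# Why bigger samples sharpen the picture: the confidence interval around a
# risk difference shrinks as n grows. The event rates below are assumptions.
from math import sqrt

p_high, p_normal = 0.012, 0.008   # assumed annual event rates in two blood-pressure groups

for n in (5_000, 50_000, 500_000):
    n_per_group = n // 2
    se = sqrt(p_high * (1 - p_high) / n_per_group +
              p_normal * (1 - p_normal) / n_per_group)
    half_width = 1.96 * se          # half-width of the 95% confidence interval
    print(f"n={n:>7}: risk difference {p_high - p_normal:.4f} +/- {half_width:.4f}")
# At n=5,000 the interval spans zero; at n=500,000 it clearly does not.
```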
The challenge in advancing treatments for dementia is to apply large datasets to identify early determinants and apply these findings to the discovery and development of drugs that delay onset, relieve symptoms and slow progression.
Information that is relevant to dementia and other disorders is held in a range of disparate databases, collected for different purposes, in different countries. There is a huge task of data interpretation to combine these sources in a meaningful way, and use them to answer questions. “Rows of people, with columns of variables is the fundamental challenge.”
Gallacher proposed simplifying data to enable its integration and analysis, pointing to UK Biobank data, from which it is possible to identify participants with memory deficits and APOE4 (apolipoprotein E) markers, as potential recruits to trials of Alzheimer’s drugs targeting APOE4.
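The selection Gallacher described can be pictured with a minimal sketch over synthetic records; the field names are invented and this is not the UK Biobank schema or access route.

```python
# Filter a synthetic participant table for people with both a memory deficit
# flag and at least one APOE e4 allele, as candidate trial recruits.
participants = [
    {"id": "P001", "memory_deficit": True,  "apoe_e4_alleles": 1},
    {"id": "P002", "memory_deficit": False, "apoe_e4_alleles": 2},
    {"id": "P003", "memory_deficit": True,  "apoe_e4_alleles": 0},
    {"id": "P004", "memory_deficit": True,  "apoe_e4_alleles": 2},
]

candidates = [p["id"] for p in participants
              if p["memory_deficit"] and p["apoe_e4_alleles"] >= 1]
print(candidates)   # -> ['P001', 'P004']
```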
This example illustrates how real world evidence could support the transition from a functional definition of a disease, to defining it by biological mechanisms, providing a far more potent base on which to develop a new medicine.
In contrast to clinical trials – where real world evidence can be used to identify subjects at high risk – in public health it is necessary to look at the population as a whole in order to formulate policies that will provide the greatest benefit.
“The issue is not just throwing data at it, but how you use data to answer the question,” Gallacher said.
Ferran Sanz, Director of the Research Programme of Biomedical Informatics, Hospital del Mar Medical Research Institute, Universitat Pompeu Fabra in Barcelona, outlined the many and various – and voluminous – sources of chemical and molecular biology data in Europe, and described examples of Innovative Medicines Initiative (IMI) projects in which these sources are being integrated and applied to improve drug development and safe use in areas including toxicology and pharmacovigilance.
Although there may be distinct realms, biomedical research is a continuum in which one element informs another, and each element is generating huge volumes of data. There are, for example, more than 20 million journal papers in electronic format; genomics and other ‘omics databases containing many petabytes of genotypic and phenotypic information; a wealth of freely available information about small molecules and protein structures; millions of electronic health records; digitised medical images; and inputs from social media (where health is one of the most aired topics).
“If we are able to interrogate all these heterogeneous sources of information, there would be a better view of diseases and therapies,” Sanz said.
The E-Tox project, for example, has mined millions of electronic health records to find associations between particular drugs and adverse events. Once these signals have been picked up, there is a search for the possible biological underpinnings, using computer analyses of the known interactions between drugs and proteins, and of how proteins are related to disease pathways.
In the project, legacy reports from pharma companies were integrated with public sources to create a combined database of human safety information. It is hoped this will enable reliable in silico prediction of side effects in the critical stages of drug development, reducing attrition and the requirement for animal testing.
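One standard way such drug/adverse-event associations are screened is disproportionality analysis. The sketch below computes a proportional reporting ratio over synthetic counts; it illustrates the general technique, not the specific methods used in the project.

```python
def prr(drug_event, drug_no_event, other_event, other_no_event):
    """Proportional reporting ratio from a 2x2 table of record counts."""
    rate_drug = drug_event / (drug_event + drug_no_event)
    rate_other = other_event / (other_event + other_no_event)
    return rate_drug / rate_other

# Synthetic counts: records mentioning the drug of interest versus all other
# drugs, split by whether the adverse event of interest was recorded.
signal = prr(drug_event=40, drug_no_event=9_960,
             other_event=200, other_no_event=189_800)
print(f"PRR = {signal:.1f}")   # ratios well above 2 are typically flagged for review
```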
Another example of the power of large biomedical databases comes from DisGeNET, which contains 500,000 records of gene-disease associations. Amongst other applications, this can be used to construct diseasomes and understand co-morbidities by sketching a network of relationships between diseases based on common molecular backgrounds, Sanz said.
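As an illustration of how such a network can be built, the snippet below links diseases that share at least one associated gene; the handful of associations shown are invented examples, not DisGeNET content.

```python
from itertools import combinations

# Tiny invented gene -> disease associations.
gene_disease = {
    "APP":  {"Alzheimer disease"},
    "APOE": {"Alzheimer disease", "Hyperlipidaemia"},
    "TNF":  {"Rheumatoid arthritis", "Crohn disease"},
    "IL6":  {"Rheumatoid arthritis", "Type 2 diabetes"},
}

# Invert to disease -> genes, then connect diseases with overlapping gene sets.
disease_genes = {}
for gene, diseases in gene_disease.items():
    for d in diseases:
        disease_genes.setdefault(d, set()).add(gene)

for a, b in combinations(sorted(disease_genes), 2):
    shared = disease_genes[a] & disease_genes[b]
    if shared:
        print(f"{a} -- {b} (shared genes: {', '.join(sorted(shared))})")
```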
The above are notable individual examples of the insights that can be unlocked from biomedical data stores. The case of Estonia, where there is public consensus and full legal backing for the use of real world data, underlines the far greater potential value which arises from taking a comprehensive approach that embraces all data sources.
The Estonian Biobank contains data on 52,000 participants, or 5 percent of the adult population. As Tõnu Esko, Deputy Director of Research at the Estonian Genome Centre noted, while “there are larger biobanks” the broad informed consent, legislation in the form of the Human Genes Research Act, and the country’s nationwide e-services backbone, make Estonia’s biobank a more powerful resource for research.
The e-government services network runs off a common platform, through which it is possible to link all the databases. Researchers using the biobank can integrate public repositories including hospital records, pharmacy, health insurance information, causes of death and other disease-specific registries.
Taken in combination, it becomes possible to assess individual risk of developing disease, based on genetics, environment, comorbidities and age, Esko said.
This can inform public health measures to support prevention, for example, sending people who have had one heart attack mobile phone SMS messages to encourage them to maintain lifestyle changes.
The Estonian Biobank provides the foundations for the Estonian Programme for Personalised Medicine, in which data from all major databases will be integrated and interrogated to support clinical decision-making and treatment. There are plans to develop an e-health database containing genotypes, e-health records, prescriptions and so on, relating to 500,000 people by 2022, Esko concluded.
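As a toy illustration of the kind of individual risk assessment Esko described, the sketch below combines age, a comorbidity flag, a lifestyle factor and a genetic score into a probability using a logistic model. Every coefficient here is invented; real models are fitted to the linked data.

```python
from math import exp

def risk_probability(age, has_comorbidity, smoker, genetic_score):
    """Toy logistic risk model; all coefficients are invented for illustration."""
    log_odds = (-6.0                      # baseline
                + 0.04 * age              # per year of age
                + 0.7 * has_comorbidity   # relevant comorbidity present (0/1)
                + 0.5 * smoker            # lifestyle/environmental factor (0/1)
                + 0.9 * genetic_score)    # standardised polygenic score
    return 1 / (1 + exp(-log_odds))

print(f"{risk_probability(age=62, has_comorbidity=1, smoker=0, genetic_score=1.2):.1%}")
```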
Currently, 50 percent of clinical trials fail to meet their target recruitment dates. To address this, EHR4CR (Electronic Health Records for Clinical Research) has developed a neutral platform enabling re-use of patient data held in electronic health records, with the aim of optimising recruitment.
The project tackled three particular pinch points – protocol optimisation, where the aim was to make it possible to find out how many eligible patients there are and where they are located before a protocol is finalised; speeding up recruitment by making electronic health data searchable for investigators; and sharing data to reduce duplication and cost.
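A minimal sketch of the protocol-feasibility step can make this concrete: count, per hospital, how many patients would meet draft eligibility criteria before the protocol is finalised. The criteria and records below are synthetic, and EHR4CR’s own query formalism is not reproduced here.

```python
# Synthetic per-site patient summaries.
sites = {
    "Hospital A": [{"age": 67, "copd": True,  "on_steroids": False},
                   {"age": 54, "copd": True,  "on_steroids": True}],
    "Hospital B": [{"age": 71, "copd": True,  "on_steroids": False},
                   {"age": 45, "copd": False, "on_steroids": False}],
}

def eligible(patient):
    """Draft criteria: COPD diagnosis, age 50 or over, not on steroids."""
    return patient["copd"] and patient["age"] >= 50 and not patient["on_steroids"]

feasibility = {site: sum(eligible(p) for p in patients)
               for site, patients in sites.items()}
print(feasibility)   # -> {'Hospital A': 1, 'Hospital B': 1}
```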
The initial service has now been implemented and evidence is starting to emerge that computer-assisted identification of patients is faster, and that the reduction in manual input is cutting the cost of conducting trials and the time taken to complete them, said Dipak Kalra.
Hospitals are attracted to use the system because it means more of their patients are recruited to trials and their clinical staff stay up to date and are involved with the latest research, enhancing reputations, increasing visibility and improving patient care.
Kalra said the use of the platform is prompting hospitals to put more effort into ensuring data quality, leading to improved internal management and better care.
Following on from the launch of the EHR4CR platform there is a push to encourage its adoption by ‘Champion’ hospitals, which will further validate and improve the technology.
EHR4CR has also prompted the formation of the European Institute for Innovation Through Health Data, as an independent body charged with overseeing the EU data re-use ecosystem. “There is a whole series of brakes that stop us from using data. We want to grow a central point of best practice in information governance and how to share intelligence from health data, and show the value of this in a formal way,” Kalra told delegates.
Real world data from multiple sources has value across drug development, from sketching in the background of epidemiology, burden of disease and unmet need, to protocol optimisation, patient recruitment, and on to enriched studies, noted Alison Bourke, Scientific Director, Real World Evidence, IMS Health.
The benefits include better targeted clinical trial populations, greater accuracy in planning trials, faster recruitment and data collection – all of which lead to cost savings.
However, there are a number of issues that must be addressed, such as compliance with governance rules (which may state commercial entities cannot have access); interoperability of clinical systems; preserving confidentiality when linking one system to another; assessing the quality of third party data; having the skills and expertise to utilise data; and complying with patient consent – and in particular the emerging requirement for dynamic, rather than one-off consent.
Bourke gave a number of examples of how linking databases provides insight. In the case of an epidemiological study of the occurrence of gestational hypertension or pre-eclampsia in living kidney donors in Canada, access was required to databases recording women who donated kidneys, became pregnant and got pre-eclampsia. The study concluded donors were no more likely to experience these conditions than non-donors.
While there are examples of using real world data to identify patients, there are very few that exemplify its use to optimise trial design, and Bourke called for more proof of the value of real world data in this area.
As an example of real world data in an enriched study, Bourke referred back to a 25,000-patient double-blind UK study, reported as long ago as 1993, comparing two asthma treatments, salmeterol and salbutamol. The nationwide surveillance study used electronic health records from general practitioners held in the VAMP database (later the General Practice Research Database, now the Clinical Practice Research Datalink), to pre-populate case report forms.
Creating a real world evidence ecosystem is an important vision for the future, Bourke said. “Real world data won’t save a bad trial, but it can offer efficiencies and cost savings, if used effectively.”
As Iain Buchan, Clinical Professor in Public Health Informatics at the University of Manchester, observed, big data does not equal big discovery. The tsunami of data is of variable quality and much of the evidence generated through it is not reproducible.
With limitations in the tools and methodologies for analysis, adding more data creates an unmanageable blizzard. And with a shortage of skills and expertise to interpret data compounding the problem, the result can be an information drought. A bigger sample, representing greater heterogeneity can actually reduce discovery power, Buchan told the symposium.
In real life, care pathways are mash-ups, with individual health care professionals overlaying their inputs on top of each other. In Type II diabetes, a diabetologist will focus on glucose control, a nephrologist on controlling blood pressure. Meanwhile, general practitioners will put a focus on diet and exercise. “Patients are not the sum of the evidence, they are the union of it,” Buchan said.
The heterogeneity of individual responses to a particular treatment creates a need for far more context; however, this cannot be found in large real world databases. One way forward is to use real world data to feed what Buchan termed a “missed opportunities detector”, to single out actionable information that can inform improvements in care quality management and support patients in managing long term conditions.
Highlighting and deploying actionable information attracts trust and traction from patients and the public, leading in turn to better data quality, Buchan said.
One example involves an attempt to reduce relapses in patients with psychosis by providing a smartphone app to help in the control and management of symptoms and to provide feedback to healthcare professionals on mood, as an early warning signal that intervention may be required.
Smart phones and other low-cost wearable technologies provide the foundations for scalable, always-on analytics that can be used to promote healthy behaviours, support people living with long term conditions, and monitor patients at risk of complications or exacerbations.
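A minimal sketch of the early-warning idea behind such apps: smooth daily self-reported scores and raise a flag when the recent average drops below a threshold. The scores, window and threshold below are invented for illustration.

```python
from statistics import mean

def early_warning(scores, window=7, threshold=4.0):
    """Return the first day index at which the rolling mean falls below the threshold."""
    for end in range(window, len(scores) + 1):
        if mean(scores[end - window:end]) < threshold:
            return end - 1
    return None

daily_mood = [6, 6, 5, 6, 5, 5, 4, 4, 3, 3, 3, 2]   # synthetic 0-10 self-ratings
day = early_warning(daily_mood)
print(f"Alert clinician on day {day}" if day is not None else "No alert")
```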
With the technical infrastructure in place, it would be possible for different users to share data and insights, and to conduct clinical trials in which the patient populations more closely resemble those in the real world, supporting care quality management, research, public health and the commissioning of healthcare services.
Such a network would also allow for better feedback, making it possible to understand risk and change care pathways far sooner than the current approach of waiting for evidence to emerge in the published literature.
Buchan described plans to implement always-on data analytics in the ‘Connected Health Cities’ pilots that are about to get off the ground in the north of England. Different regions are nominating two care pathways and will then – in consultation with the public – look at how they should be optimised.
The database of the Netherlands-based PHARMO Institute provides the inputs for research in pharmacoepidemiology, drug use studies, epidemiology, post approval safety studies and post authorisation outcomes studies.
Van Wijngaarden used a case study in patterns of use of oral contraceptives to illustrate how large databases from healthcare practice can be applied to understand the risks of rare adverse events, including deep vein thrombosis and pulmonary embolism, which are associated with oral contraceptive use but are hard to detect in RCTs.
He also cautioned that this example underlines the need to understand the context of each individual dataset used in such analyses.
The study, using data from a British, an Italian and two Dutch electronic health record systems, showed levels of use of oral contraceptives by women aged 15-49 years ranging from 2.6 percent in one database to 19.7 percent in another.
The big range in levels of use seen in the study is partly attributable to differences in prescribing and dispensing in the three countries, making it important to know how different health care services operate. Furthermore, van Wijngaarden noted, only four databases were used in the study because in other countries oral contraceptives are not reimbursed and therefore not recorded.
“The aim was to present the real issue from a European standpoint of how many women are at risk. But this shows it is hard to know, even with electronic health records,” van Wijngaarden said. “So there are challenges in leveraging electronic health records data: the results are meaningless without the context.”
To deal with these constraints it is necessary to know why data was captured, how it was captured, and what was not captured. It is also important to be aware that context may change over time, for example, reimbursement rules could be altered.
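A simple sketch shows why context changes the numbers: the prevalence calculation itself is trivial, but if a database captures only reimbursed dispensings, unreimbursed use never reaches the numerator. The figures below are invented, and a gap like this could reflect genuinely different use, different recording practice, or both.

```python
def oc_prevalence(women_15_49, women_with_recorded_oc):
    """Share of women aged 15-49 with at least one recorded oral contraceptive."""
    return women_with_recorded_oc / women_15_49

# Two hypothetical databases: one captures all dispensings, one only those reimbursed.
full_capture    = oc_prevalence(women_15_49=100_000, women_with_recorded_oc=18_000)
reimbursed_only = oc_prevalence(women_15_49=100_000, women_with_recorded_oc=3_000)
print(f"{full_capture:.1%} vs {reimbursed_only:.1%}")   # -> 18.0% vs 3.0%
```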
Healthdata.be is a key plank of Belgium’s National e-health Action Plan 2013-2018, which has the aim of streamlining and improving healthcare in the country, as Johan Van Bussel, Programme Coordinator of the service, described.
The driver for the Action Plan was to simplify the administration of Belgium’s convoluted systems. Healthdata.be is charged with putting in place processes and applications to ensure data collection and dissemination is done in an efficient and secure manner. “The end result should be a lower cost of data collection. Belgium spends a lot on this currently, and re-use is impossible,” said Van Bussel.
Following the creation of an annotated inventory of all the country’s health registries, Healthdata.be has developed a common open architecture, Health Data for Data Providers (HD4DP) that enables users in hospitals, general practice, laboratories and specialist centres to input data.
Data is then secured and pseudonymised for re-use in research via Health Data for Researchers (HD4RES). Implementation began in July 2015 and an important milestone was reached in September when two hospitals contributed data for the first time to a national cystic fibrosis database.
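As a minimal sketch of the pseudonymisation principle, direct identifiers can be replaced by a keyed hash so that records about the same person can still be linked across submissions without revealing the identifier. This illustrates the general idea only and is not Healthdata.be’s actual mechanism.

```python
import hashlib
import hmac

SECRET_KEY = b"held-by-a-trusted-third-party"   # never shared with researchers

def pseudonymise(national_id: str) -> str:
    """Deterministic keyed hash: the same input always yields the same pseudonym,
    but the identifier cannot be recovered from the pseudonym."""
    return hmac.new(SECRET_KEY, national_id.encode(), hashlib.sha256).hexdigest()[:16]

record = {"national_id": "85010112345", "diagnosis": "cystic fibrosis", "fev1": 62}
for_researchers = {"pseudonym": pseudonymise(record["national_id"]),
                   "diagnosis": record["diagnosis"],
                   "fev1": record["fev1"]}
print(for_researchers)
```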
The system is providing an umbrella for 43 research projects that were already underway in areas including HIV, surgical implants, antibiotics and healthcare-related infections. “So there is already a reason for hospitals to standardise and take part,” Van Bussel said.
Companies will be able to access the technology platform through Health Data for Industry (HD4I). “The government recognised the need for industry involvement for pharmacovigilance and also in the context of reimbursement negotiations,” said Van Bussel. The first industry-led project started earlier this year and there is a high level of interest in using the platform.
Sarah Garner, Associate Director Science Policy and Research at the UK’s National Institute for Health and Care Excellence (NICE), described policy initiatives in which she is involved that aim to take real world evidence and use it to make healthcare better. “[Data] is telling us lots of different things. How can we take it to improve practice?” Garner said.
Currently, drug development consists of a “fixed price menu” with phase III trials as the main course. This is no longer appropriate given the rise of health technology assessment and the need to demonstrate clinical effectiveness and cost effectiveness of a particular drug in the context of a specific healthcare system. The shortcomings of the existing approach are now being further exposed by the arrival of targeted medicines.
“The old model is not working well; there are not enough patients and in areas of high unmet medical need, there are no comparators,” said Garner.
MAPPs (Medicines Adaptive Pathways to Patients) is a research programme that works within existing legislation, taking ingredients from the existing regulatory development menu and recombining them to optimise clinical development and access.
The starting point is a ‘Safe Harbour’ discussion in which stakeholders can freely discuss development options and the pros and cons of various trial designs. It is envisaged that this is followed by early evidence generation, parallel regulatory review and health technology assessment, leading on to conditional licensing and reimbursement of drugs in specific patient populations, and allowing real world evidence to be gathered to confirm, review or extend approvals.
“Adaptive licensing shifts the way evidence is used,” Garner said. “You get an initial licence upstream after phase II and then use real world evidence to build up the portfolio.”
The GlaxoSmithKline (GSK)-sponsored Salford Lung Study represents an ambitious attempt to bridge from efficacy to effectiveness and demonstrate value for money and benefit for patients in a large-scale trial that is designed to be as close to the real world as possible, whilst maintaining the rigour of an RCT.
Underlining the extent of the gulf, only three percent of asthma patients and seven percent of chronic obstructive pulmonary disease (COPD) patients would meet the inclusion criteria for an RCT. In the Salford Lung Study the only inclusion criteria were a diagnosis of COPD or asthma, and the occurrence of an exacerbation in the three years before joining the study.
The trial is open label; patients could be switched on and off medications and could keep up all their other existing treatments. There was no additional monitoring over and above the usual healthcare. “We wanted the experience of patients to be as near normal as possible,” said Andrew Roddam, Vice President and Global Head of Epidemiology at GSK.
Achieving this required significant effort in training over 3,000 healthcare practitioners and pharmacists and in setting up constant, automated data collection systems. When the trial started the product being tested was not licensed, creating the need for daily safety monitoring. “It was a huge amount of work to make data collection a reality,” Roddam said.
The COPD arm of the study recruited 2,800 patients, and the asthma arm (which is ongoing) 4,326. To date there have been 55,100 patient visits and a total of 235 million rows of data have been entered into the database.
In short, the study is a huge undertaking that required partnership between GSK, the National Health Service, high street pharmacists and academic groups. Three hundred GSK staff and others worked on the trial.
In terms of how setting up and conducting the Salford Lung Study should inform future real world, pragmatic trials, Roddam pointed to the need to imbue a culture of research in primary care, so that care and research can go hand-in-hand.
While the Salford trial is operationally complex, it would be possible to simplify things in future. However, there are issues around scaling data flows and understanding the quality and reliability of data. This is “about improving the care of patients by doing research embedded within care,” Roddam concluded.
In terms of the structure of EIT Health, six centres defined as high innovation performers are supporting EIT Health InnoStars clusters in less developed regions, which are viewed as having the potential, ideas and talent to improve their innovation performance.
The programme is focussing on the EU’s Horizon 2020 research challenge, ‘Health, Demographic Change and Wellbeing’, with the aim of promoting the translation of Europe’s high quality biomedical research into patient benefit and commercial success, Balázs Fürjes, Director of EIT Health InnoStars, told delegates. “The activity in a nutshell is about improving the innovation capacity in Europe,” Fürjes said.
After establishing the structure in 2015, the first projects will start this year. “We are looking for projects from teams aiming to bring a solution to market within two years. This could be universities with a technology looking for a market or [companies] with a product that needs inputs,” said Fürjes.
Hungary is home to one of the Health Innostars hubs and Miklós Bacskai, Chief Strategy and Business Development Officer of Healthware Consulting Ltd outlined the state of play in real world data in the country.
The history of electronic healthcare data collection in Hungary goes back to the 1970s and gathered pace in the 1990s, leading on to the development of the ‘Electronic Health Cooperation Service Space’, a cloud-based, centralised system that enables information systems and health professionals to work together.
Comprehensive medical data is available with the ability to link an individual’s data along the patient journey, and this can be used for authorised research purposes, Bacskai noted. Protocols and analytical tools provide the basis for investigators to deploy these data resources to generate new insights.
There are some constraints, however: only non-personalised data can be accessed, and only offline or in a research room. Queries cannot relate to small groups and access is bureaucratic, with researchers having to justify the public interest in each new project.
There is “a great need for central coordination for real world research” with contract templates and dynamic support capabilities, Bacskai said in summary.
EFPIA (the European Federation of Pharmaceutical Industries and Associations) is driving forward real world data research in a number of projects involving clinicians, regulators, HTA bodies and payers, in particular within the IMI public private partnership.
While he welcomed the degree of accord and level of collaboration, Richard Bergström, Director General of EFPIA, cautioned against the temptation to “rush ahead” at this stage. “[There is] a need to prepare other people in health care systems, in particular people making the decisions – not everyone is following the scientific papers,” Bergström said.
He is concerned that a move by pharma to put more emphasis on real world evidence (albeit in support of RCTs), will be viewed by some as dialling down regulation and increasing risk, in order to get to market sooner.
In fact, the opposite is the case, Bergström said. “We are now on the cusp of something new – the [large-scale] capture of data, that is, an up-regulation [of oversight].” There is a huge task in bringing everyone along and broadening the audience.
Pharmaceutical companies across Europe are currently in the thick of implementing the European Medicines Verification System, designed to enable the tracking of each pack of medicine from manufacturer to patient. The initial motivation for tracking the entire supply chain in a single market with a free flow of goods is to deter theft and prevent falsified medicines getting to patients.
However, Bergström noted, there are other possible applications, in pharmacovigilance, pharmacoepidemiology and reimbursement. “We are now thinking about how the system can be used and are having discussions with pharmacovigilance bodies and regulators about next-generation risk programmes,” said Bergström.
Another real world evidence programme, IMI’s €100 million Big Data for Better Outcomes, is currently evaluating proposals for research to promote the use of diverse data sources to deliver results that go beyond the hard clinical endpoints required by the European Medicines Agency, with the aim of reflecting outcomes of treatments that are meaningful for patients.
A first project in Alzheimer’s disease aims to develop new outcomes measures, identify sources of outcomes data and establish a framework around which to gather new data. “Individual companies can’t do these things,” Bergström said. “There’s too much to do.”
In recognition of the scale of such research the pharma industry, working within IMI, is gradually redefining what constitutes pre-competitive and competitive R&D. Alzheimer’s disease represents a potent exemplar of this shift, said Bart Vannieuwenhuyse, Senior Director Health Information Sciences, Janssen.
In addition to Big Data for Better Outcomes, EFPIA members are working together in Alzheimer’s disease consortia including PharmaCog, which is looking for biomarkers of efficacy for new treatments; AETIONOMY, which aims to redefine the classification of Alzheimer’s disease to support more personalised treatments; EPAD (European Prevention of Alzheimer’s Dementia), which is pioneering a novel, flexible, adaptive approach to clinical trials; and EMIF, which is linking and analysing relevant data.
Europe is the source of profound advances in understanding of disease biology, but there is a huge challenge in bringing this to patients. At the same time the public health challenges of dealing with emerging diseases and chronic conditions are intensifying and pharma is confronting the need to reduce risk, inefficiencies and costs in the development of new medicines.
“IMI is an environment where different stakeholders can interact and share risks. A lot of projects are too big for one single institution. When you compare how pharma collaborates now, compared to 15 years ago, it is a sea change,” Vannieuwenhuyse said.