There are two main, highly complementary ways to study HD. One is to investigate disease mechanisms using experimental biology. The other is to take a systems-level approach, using computer science and informatics to analyse ever-growing quantities of “poly-omic” (transcriptomic, proteomic, functional and other) data. The latter is the focus of this working group. We aim to generate in silico HD models that help prioritise targets, either as predictive markers of pathogenesis or as candidates for the development of treatments.
A further challenge is to generate models that reduce data complexity to a small number of biologically precise hypotheses and targets. The best models strike the right balance between coverage and selectivity, that is, between highly descriptive source data and good discriminative power.
We use network concepts and methods for the unbiased analysis and integration of Huntington’s disease datasets, such as next-generation sequencing data and large-scale gene-perturbation screens. We share our expertise with other EHDN working groups, such as the Genetic Modifiers and Biomarkers Working Groups.
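To make the network idea concrete, the sketch below shows one common style of analysis: represent genes as nodes and interactions as edges, then rank genes by a centrality measure to nominate candidates for follow-up. The edge list here is purely illustrative (a handful of well-known HTT interactors, not a curated HD dataset), and degree centrality stands in for the richer measures and integration steps a real pipeline would use.

```python
from collections import defaultdict

# Toy protein-protein interaction edges (illustrative only, not a real HD dataset).
edges = [
    ("HTT", "HAP1"), ("HTT", "HIP1"), ("HTT", "CREBBP"),
    ("HAP1", "KIF5A"), ("HIP1", "CLTC"), ("CREBBP", "TP53"),
    ("HTT", "TP53"),
]

# Build an adjacency list for an undirected gene network.
adj = defaultdict(set)
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)

# Degree centrality: a gene's number of interactions, normalised by the
# maximum possible (n - 1). A simple proxy for network importance.
n = len(adj)
degree_centrality = {gene: len(nbrs) / (n - 1) for gene, nbrs in adj.items()}

# Rank genes from most to least central; hubs are prioritisation candidates.
ranked = sorted(degree_centrality, key=degree_centrality.get, reverse=True)
print(ranked[0])  # the hub gene in this toy network
```

In practice the same ranking step would run over networks built from experimental data (co-expression, protein interactions, perturbation screens), and more discriminative centrality or module-detection measures would replace plain degree.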