Monash NICTA scholarships of up to $31,000 are available.
NICTA undertakes pure and applied use-inspired research that advances knowledge and produces technology that will benefit end users by solving real problems. Eligible projects need to align with NICTA's research interests and one of Monash University's identified flagship programs. Monash programs are in three key areas:
- Biomedical Imaging
- Computational Biology
- Optimisation
Further details about the flagships are below, and information on applying for a scholarship is at the bottom of this page.
Computational Biology (Leader: Geoff Webb)
- Integrated computational technologies for determining the structure and dynamics of very large macromolecules: While techniques for determining the structures of small and medium-sized proteins are well developed, they do not scale effectively to large proteins. The determination of novel protein structures has high impact (e.g. many Nobel Prizes have been awarded for it), so this is an area of great opportunity. This flagship program seeks to create novel computational technologies for constructing models of large protein molecules from X-ray crystallography, small-angle X-ray scattering (SAXS), circular dichroism (CD), cryo-electron microscopy (cryo-EM), molecular modelling and simulation data.
- Systems Biology and biological processes: A holistic understanding of biological processes and systems (abstracted as multi-layered interacting networks) plays a vital role in unravelling Nature's choreography of life. This program aims to develop methodologies for modelling such processes (e.g. Bayesian networks), their graphical representation (e.g. SBGN) and their analysis, in order to study environmental, epidemiological or systemic diseases (e.g. malaria, tuberculosis, cancer) for drug design.
- Computational Genomics - We develop novel computational solutions to distinguish mutations identified in cancer genome sequencing projects as "drivers" of pathogenesis from functionally inconsequential ones (the so-called "passenger" mutations).
- Computational Proteomics - We aim to model and understand the structure and function of proteins and protein-protein interactions.
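As a toy illustration of the Bayesian-network modelling mentioned in the Systems Biology program above, the following pure-Python sketch factorises a joint distribution as a chain of conditional probability tables. The three-node gene/protein/disease network and all of its probabilities are invented for illustration; they are not taken from any NICTA project.

```python
# Minimal Bayesian network sketch: P(Gene, Protein, Disease) factorises as
# P(Gene) * P(Protein | Gene) * P(Disease | Protein).
# Structure and numbers below are hypothetical, for illustration only.

p_gene = {True: 0.1, False: 0.9}                 # P(gene mutated)
p_protein = {True: {True: 0.8, False: 0.2},      # P(protein misfolded | gene)
             False: {True: 0.05, False: 0.95}}
p_disease = {True: {True: 0.6, False: 0.4},      # P(disease | protein)
             False: {True: 0.02, False: 0.98}}

def joint(gene, protein, disease):
    """Joint probability via the chain rule implied by the network."""
    return p_gene[gene] * p_protein[gene][protein] * p_disease[protein][disease]

def marginal_disease():
    """Marginalise out gene and protein to get P(disease = True)."""
    return sum(joint(g, p, True)
               for g in (True, False) for p in (True, False))
```

Queries against such a model reduce to summing the factorised joint over the unobserved variables, as `marginal_disease` does; real systems-biology networks simply have many more nodes and richer conditional tables.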
Optimisation (Leader: Maria Garcia de la Banda)
- High-level Modelling - The task of formulating optimisation problems so that they can be solved by software is called modelling. While the choice of model and associated solving technique is crucial to the quality of the solutions obtained for a given problem, making a good choice is very difficult and requires a significant amount of time and experience. Therefore, being able to model problems at a high level of abstraction, to "plug and play" with a range of different solving techniques, and to iteratively analyse and improve the models is crucial for making optimisation technology accessible to a wide user base. This flagship program conducts research on high-level solver-independent modelling, developing expressive modelling languages, efficient translation and reformulation mechanisms, and powerful analysis, profiling, debugging, and visualisation techniques. This will give users the tools to model real-life, industrial optimisation problems in a concise, high-level language, to execute their models on all state-of-the-art solving technologies, and to analyse, visualise, understand and improve their models.
- Optimisation Solving Technology - The workhorse of any optimisation system is the underlying solving technology, which includes both the solvers (which deal with the constraints) and the search processes (which determine how the search space is explored). This flagship program develops novel techniques for improving constrained optimisation solving. These include designing better algorithms and data structures for solvers, exploiting model structure such as symmetries and model equivalences during search, and developing new techniques for constrained optimisation, including hybrid techniques that combine constraint programming with SAT, linear and non-linear programming, and local and heuristic search methods.
- Applications - Applying research results to real applications is the best way to ensure the research is relevant and effective. This flagship program uses constrained optimisation techniques in business and government applications, including transport, sustainability, document and diagram layout, emergency management, and automatic software testing.
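To make the model/solver separation described in the High-level Modelling program concrete, here is a minimal Python sketch (it does not depict any actual NICTA modelling language): the model is stated purely as data — variables, domains, and constraint predicates — and a naive enumerating solver is applied to it. Any smarter solving technology that honours the same model structure could be plugged in without touching the model. The toy map-colouring instance and all names are invented for illustration.

```python
from itertools import product

# The model is pure data: variables with finite domains, plus constraints
# expressed as predicates over a candidate assignment. This toy instance
# colours three mutually adjacent regions with three available colours.
model = {
    "variables": {"WA": ["r", "g", "b"],
                  "NT": ["r", "g", "b"],
                  "SA": ["r", "g", "b"]},
    "constraints": [lambda a: a["WA"] != a["NT"],
                    lambda a: a["NT"] != a["SA"],
                    lambda a: a["SA"] != a["WA"]],
}

def solve(model):
    """A deliberately naive, solver-independent enumerator. Any engine
    that accepts the same model structure (CP, SAT, MIP, ...) could
    replace this function without any change to the model above."""
    names = list(model["variables"])
    for values in product(*model["variables"].values()):
        assignment = dict(zip(names, values))
        if all(c(assignment) for c in model["constraints"]):
            yield assignment
```

For example, `next(solve(model))` yields one valid colouring; swapping in a cleverer backend changes only `solve`, which is the accessibility point the program description makes.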
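As a small sketch of the local and heuristic search methods mentioned in the Solving Technology program, the following implements min-conflicts local search for the classic n-queens problem. This is a standard textbook technique chosen for illustration, not a description of any specific NICTA solver.

```python
import random

def conflicts(cols, row, col):
    """Number of queens attacking square (row, col), given one queen
    per row with cols[r] holding the column of the queen in row r."""
    return sum(1 for r, c in enumerate(cols)
               if r != row and (c == col or abs(c - col) == abs(r - row)))

def min_conflicts(n, max_steps=10000, seed=0):
    """Min-conflicts local search: start from a random placement and
    repeatedly move some conflicted queen within its row to the column
    that minimises its number of conflicts."""
    rng = random.Random(seed)
    cols = [rng.randrange(n) for _ in range(n)]
    for _ in range(max_steps):
        conflicted = [r for r in range(n) if conflicts(cols, r, cols[r]) > 0]
        if not conflicted:
            return cols                      # a full solution was found
        row = rng.choice(conflicted)
        cols[row] = min(range(n), key=lambda c: conflicts(cols, row, c))
    return None                              # give up after max_steps
```

Like the heuristic methods the program studies, this trades completeness for speed: it may stall in a local minimum and return `None`, but when it succeeds it does so without exploring the search space exhaustively.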
Biomedical Imaging (Leader: Gary Egan)
- Novel algorithms for computational imaging and analysis - Novel algorithms, including compressed sensing and iterative reconstruction, are being applied to improve and accelerate image acquisition and reconstruction, enabling studies with special characteristics such as very high temporal or sub-voxel spatial resolution. Morphological techniques (e.g. 3-d shapelets) are being applied for segmentation and classification of images, and non-parametric techniques such as manifold analysis are being used for dimension reduction in large, multi-subject studies. Our current biomedical imaging projects require the development of advanced automatic feature segmentation techniques.
- Accelerating computational imaging and analysis - Graphical processing units (GPUs) - such as those powering the Multi-modal Australian ScienceS Imaging and Visualisation Environment (MASSIVE) - are being used to accelerate computational imaging and image analysis. Reductions in processing time up to 100 times make very large, multi-subject, multi-modal imaging studies feasible, open up new approaches to algorithm choice and application, and can assist in translating computationally-challenging algorithms to real time clinical use. Specific areas of application include linear and non-linear image registration, cortical surface extraction and microstructural tractography.
- Visualisation-led discovery and communication - The two great challenges of contemporary visualisation are (i) managing very large, multi-dimensional datasets while still delivering usable, interactive visualisations; and (ii) reducing the extensive filtering and censoring of data that is commonplace in published graphs and data projections, while retaining meaning and context. Volume rendering of large datasets has recently been achieved at 2.5 teravoxels per second on an intermediate-size GPU cluster. With collaborators from several disciplines, we are developing non-commercial technologies for embedding 3-d scientific figures in PDF files, for application in the academic publishing industry, together with new visualisation paradigms for improving the comprehension and communication of neuroimaging data.
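As a minimal sketch of the iterative-reconstruction idea mentioned above, the following pure-Python Landweber iteration (gradient descent on the squared measurement residual ||y - Ax||^2) recovers a tiny, invented "image" from linear measurements. Real reconstruction pipelines add regularisation (e.g. sparsity priors, as in compressed sensing) and operate at vastly larger scale, typically on GPUs; the matrix and step size here are assumptions chosen only so the toy example converges.

```python
def matvec(A, x):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def landweber(A, y, steps=200, step_size=0.1):
    """Landweber iteration: gradient descent on ||y - A x||^2.
    Each update moves x along A^T (y - A x); for a small enough step
    size the iterates approach a least-squares reconstruction."""
    x = [0.0] * len(A[0])
    At = transpose(A)
    for _ in range(steps):
        residual = [yi - ri for yi, ri in zip(y, matvec(A, x))]
        grad = matvec(At, residual)
        x = [xi + step_size * gi for xi, gi in zip(x, grad)]
    return x

# Invented toy "imaging system": 3 linear measurements of a 3-pixel image.
A = [[1.0, 0.5, 0.0],
     [0.0, 1.0, 0.5],
     [0.5, 0.0, 1.0]]
x_true = [1.0, 2.0, 3.0]
y = matvec(A, x_true)      # simulated measurements
x_rec = landweber(A, y)    # iterative reconstruction of the image
```

The same loop structure — apply the forward model, compare with the measured data, back-project the residual — underlies many practical iterative reconstruction algorithms, which is also what makes them natural candidates for GPU acceleration.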
Applying for a scholarship
You must apply to Monash University (or already be a current student there) before submitting an application via NICTA. The University and NICTA will both assess applications. Before applying, you are advised to contact one of the Monash academics involved with NICTA to discuss a suitable project: