Graduate Research Projects

As well as pursuing your own research interests, you may wish to consider one of the graduate research projects and scholarship opportunities offered by the Faculty of Information Technology. This list of projects is updated regularly. The Faculty encourages applications from eligible candidates to undertake research projects within our four broad research strengths and six research flagship areas.

Available scholarships and funding are indicated against each project by the MGE marker.

All applications for candidature, and for the Monash or Faculty of IT research scholarships attached to these projects, are made through Monash Graduate Education (MGE).

India-based students interested in studying at the IITB-Monash Research Academy can click here to view those projects.


Key Scholarships
MGE: Monash Graduate Education (MGE) Scholarships. To be competitive for a central MGE or a Faculty of IT research scholarship, students require a first-class Honours (H1) degree at the 80%+ standard, an equivalent GPA (80%+), or qualifications, achievements and publications that demonstrate an Honours H1-equivalent (H1E) standard.
Some projects already have funding available, in the form of tuition fee waivers (full or partial), stipends, or both. The same H1 or H1-equivalent standard applies to these research scholarships. Contact the supervisor for more information.

 

Computational Biology

Project Title Supervisors

Next-generation protein structural comparison using information theory

MGE

Protein structural comparison is the most fundamental and widely used task in protein science. This research will develop next-generation methods for structural comparison based on a rigorous information-theoretic framework. The results will have direct payoffs for the fields of structural biology and crystallography.

Development of big data-driven bioinformatics approaches and tools for cost-effective identification of potential post-translational modification types and sites in the human proteome

Through efficient knowledge discovery and machine learning of currently available large-scale experimental PTM data, this joint PhD proposal aims to develop and deliver machine learning approaches and bioinformatics tools that can be efficiently used to detect all potential functional PTM types and sites likely to occur in the whole human proteome.

Genomics of Disease

MGE

Cost-effective next-generation nucleotide sequencing technologies provide researchers with genomic data of remarkable precision at unprecedented rates. For the first time in human history, this has made it possible to investigate at high resolution the molecular mechanisms driving diseases such as cancer, among other heritable diseases. The focus of this project will be to identify the mutations and aberrations in the host genome that drive disease manifestation, and the underlying evolutionary mechanisms responsible for progression, metastasis (in the case of cancer), and relapse over the life span of the disease.

Simulating bee foraging: how behavioural diversity in bees interacts with environmental conditions

Recent investigations of pollinator-plant interactions show that the learnt flower preferences of important pollinators like bees are dependent upon both flower temperature and regional ambient temperatures. This suggests that local and global changes in climatic conditions may directly influence how certain plants are pollinated. This project is producing computer simulations to reveal how climate change may directly influence flower evolution in the future, and how the management of environmentally and economically important plants can be modelled to inform reliable decision making about this important resource.

 

Data Systems and Cybersecurity

Project Title Supervisors

None currently available - contact Data Systems and Cybersecurity.

 

 

Data61

Project Title Supervisors
CSIRO’s Digital Productivity business unit and NICTA have joined forces to create a digital powerhouse, Data61. Data61 is offering scholarships for PhD projects aligned with Data61 and Faculty of IT research.

Data61 and Faculty of IT Researchers

 

Immersive Analytics

Project Title Supervisors

None currently available – contact Immersive Analytics

 

 

IT for Resilient Communities

Project Title Supervisors

Dynamic Descriptive Interfaces for Participatory Community Archival Networks

MGE

Recordkeeping and archiving are fundamental infrastructural components supporting community information, self-knowledge and memory needs. If developed in community-driven ways, they can contribute to resilient communities and cultures and to pan- or trans-community endeavours, whereas traditional institutional approaches can often lead to community disconnection and disempowerment. Currently the archival records relating to many communities are fragmented, dispersed and dysfunctional. Spread across archival institutions, they appear unmanaged, invisible and inaccessible from a community-centred perspective. Traditional archival description and access frameworks have focused on visibility and accessibility for scholars and researchers, leading many community-driven archives, wary of losing control of and access to their records, to resist the collecting activities of institutions.

The Resilient Cultures Theme of the IT for Resilient Communities Flagship Research Program addresses R&D challenges relating to:

  • developing systems that capture, integrate, preserve, and make available community knowledge, and preserve cultural heritage
  • using innovative and leading edge technologies to build multimedia systems for visualising and animating culture, archiving oral memory, and helping communities to work with government and institutions on their own terms.

This project focuses on the metadata needs associated with:

  • building Sustainable Living Archives enabling long-term preservation, cross-generational transfer and interactive use of community knowledge, memory and culture
  • developing Information and Memory Infrastructure to support resilient communities and community-based scholarship
  • innovative use of IT to visualise and document cultural heritage.

The successful candidate will have an excellent academic track record in archival science. Strong IT skills and experience of working with communities would be highly advantageous.

 

Machine Learning

Project Title Supervisors

Improved Statistical Models of Document Corpora for Text Retrieval

MGE

At the heart of modern Web search systems lie relatively simple non-linear functions that rank documents by their similarity to query keywords. Despite the criticality of these text retrieval functions, our theoretical understanding of them is somewhat limited in comparison to that of other non-linear models employed in machine learning. The BM25 function, for instance, arguably the most effective text retrieval function, is based on a relatively ad hoc combination of empirically derived heuristics, with little in the way of clear theoretical underpinning. The aim of this project is to develop the missing theory through the application of sophisticated statistical techniques to the modelling of text documents from a large Web crawl corpus.
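As a concrete illustration (not part of the project itself), the classic BM25 ranking function mentioned above can be sketched in a few lines; the toy corpus and parameter values below are invented for demonstration only.

```python
import math

def bm25_score(query_terms, doc_terms, corpus, k1=1.2, b=0.75):
    """Score one document against a query with the standard BM25 formula.

    `corpus` is a list of token lists. This is an illustrative sketch of
    the textbook formula, not a production retrieval system.
    """
    N = len(corpus)
    avg_len = sum(len(d) for d in corpus) / N
    score = 0.0
    for term in query_terms:
        df = sum(1 for d in corpus if term in d)          # document frequency
        idf = math.log((N - df + 0.5) / (df + 0.5) + 1)   # smoothed inverse document frequency
        tf = doc_terms.count(term)                        # term frequency in this document
        # Saturating, length-normalised term-frequency component:
        norm = tf * (k1 + 1) / (tf + k1 * (1 - b + b * len(doc_terms) / avg_len))
        score += idf * norm
    return score

corpus = [
    "the cat sat on the mat".split(),
    "dogs and birds living together".split(),
    "the dog chased the cat".split(),
]
print(bm25_score(["cat"], corpus[0], corpus))
```

The empirically tuned constants k1 and b, and the particular saturation shape, are exactly the kind of heuristic choices the project seeks to put on a firmer theoretical footing.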

Recent advances in Language Modeling have shown that hierarchical non-parametric models such as Pitman-Yor Processes [3] can be used to appropriately model power-law behaviour in natural language. This behaviour exhibits itself as “word burstiness”, whereby occurrences of the same term are not evenly distributed across the corpus but rather tend to co-occur in a small number of documents. Correctly modelling burstiness is important for accurate retrieval since it determines the amount of information that term counts carry with respect to the probable meaning of each document. This project will investigate advanced statistical models capable of modelling observed variations in burstiness across terms in a document corpus. The investigation will result in better theoretical understanding of textual data and improved Web search algorithms.
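The "word burstiness" phenomenon described above can be made concrete with a toy example (the corpus and the dispersion measure below are illustrative assumptions, not the project's data or method): a topical term concentrates in few documents, while a function word spreads evenly.

```python
# Three tiny "documents"; "whale" is bursty, "the" is not.
docs = [
    "the whale the whale whale harpoon".split(),
    "the ship the sea the wind".split(),
    "the port the crew the sail".split(),
]

def dispersion(term):
    """Index of dispersion (variance / mean) of per-document counts.

    Values well above 1 indicate bursty, clustered occurrences; values
    below 1 indicate counts more even than a Poisson model would predict.
    """
    counts = [d.count(term) for d in docs]
    mean = sum(counts) / len(counts)
    var = sum((c - mean) ** 2 for c in counts) / len(counts)
    return var / mean

print(dispersion("whale"))  # bursty: occurrences concentrated in one document
print(dispersion("the"))    # even: roughly uniform across documents
```

Hierarchical non-parametric models such as Pitman-Yor processes capture exactly this kind of clustered, power-law behaviour, which simple independence assumptions miss.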

Learning from big data

As data quantities continue to grow rapidly, there is ever-increasing demand for efficient and effective machine learning techniques for analysing very large datasets. This project is based on the hypothesis that large data quantities call for quite different types of learning algorithms than small data quantities do. Specifically, there is less need to control learning variance and greater need to minimise learning bias. This project will develop computationally efficient low-bias learning algorithms suited to effective learning from big data.

Non-parametric Bayesian Models for Text

MGE

Increasingly, non-parametric Bayesian statistical modelling is being used to model the complex, structured aspects of text: grammar, part of speech, semantic content, etc. This is especially useful in semi-supervised contexts, where only limited tagging is available. This project will apply recent non-parametric methods to achieve semi-supervised learning in some text problem.

Statistical models of documents and text to support understanding

MGE

Various models of documents and text have been proposed that address different aspects of the content: the linguistic content (natural language and named entities), the document structure (sections, paragraphs), the topical content (issues, themes), and the argumentation and affective content (sentiment). Probabilistic models using latent variables give state-of-the-art performance for some of these aspects, and are near state of the art for others. This project will consider some subset of these aspects and build a combined probabilistic model that pushes the boundaries along one aspect. The project will explore the model's effects on measurable tasks such as information retrieval, language processing or document compression.

This is an abstract project in the sense that the initial outcome will be a stronger model of a document that can be subsequently used in other tasks. The scope of the project can be varied in that the focus could be extended to include the development of the task itself. The main task we consider is the interpretation or understanding of a large document or small corpus: how can we lay out the key aspects on a small website to aid human understanding.

 

Modelling, Optimisation and Visualisation

Project Title Supervisors

Effective Profiling of Combinatorial Optimisation Problems

Constraint Programming is specifically designed to solve combinatorial optimisation problems, that is, problems that require finding a combination of choices that satisfies a set of constraints and optimises an objective function. This is, for example, the case when looking for new planes, crews and times to replace a delayed flight, or when finding a production schedule in a manufacturing company that reduces waiting while maximising profits. Finding high quality solutions to combinatorial optimisation problems allows us to make the very most of limited resources. This is beneficial for our industry, our hospitals, our security and our environment, and is also a key to wiser investment, better engineering, and accelerated bioinformatics. However, designing programs that can solve optimisation problems effectively requires an iterative process that is often extremely challenging, time consuming and costly, particularly for large-scale problems.
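The notion of "a combination of choices that satisfies a set of constraints and optimises an objective function" can be illustrated with a deliberately tiny, invented scheduling problem (a real constraint program would pass a declarative model to a generic solver rather than brute-forcing the search space as this sketch does):

```python
from itertools import product

# Hypothetical example: assign three jobs to two machines, subject to a
# capacity constraint, minimising the makespan. All names and numbers
# are made up for illustration.
durations = {"job_a": 4, "job_b": 3, "job_c": 2}
machines = ["m1", "m2"]

def load(assignment):
    totals = {m: 0 for m in machines}
    for job, machine in assignment.items():
        totals[machine] += durations[job]
    return totals

def feasible(assignment):
    # Constraint: no machine may carry more than 6 time units of work.
    return all(v <= 6 for v in load(assignment).values())

def makespan(assignment):
    # Objective: the finishing time of the busiest machine.
    return max(load(assignment).values())

candidates = (
    dict(zip(durations, choice))
    for choice in product(machines, repeat=len(durations))
)
best = min((a for a in candidates if feasible(a)), key=makespan)
print(best, makespan(best))
```

Even this toy instance has a search space that grows exponentially with the number of jobs, which is why profiling and understanding solver behaviour, as this project proposes, matters for large-scale problems.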

This project will investigate information collected during the execution of a constraint program that can be analysed and summarised in such a way as to help users understand program performance. The results will help users to design scalable, efficient optimisation programs.

Real-time data acquisition for adaptive crowd modelling

MGE

This PhD project is part of a larger collaborative project between the faculties of IT, Engineering and Science that develops methods for reliable multi-scale modelling of crowd dynamics for disaster prevention.

This project aims to develop methods that support planning and prediction of crowd movements based on data from past events, as well as adaptive planning for live events as they unfold. This two-fold approach will facilitate superior risk management in urban design and improved emergency response planning. The keys to achieving these aims are multi-scale modelling methods together with high-performance simulation and optimisation algorithms specifically designed for these computational models.

The proposed PhD project works at the interface between adaptive optimisation methods and real-time data acquisition for them. It will investigate suitable adaptive optimisation methods and their data requirements; possible ways to acquire the required real-time crowd-movement data from a variety of sources and on different timescales (mobile phone activity, traffic flow data, visual flow data, social media activity, etc.); and data fusion from such sources. It will also investigate the comparative utility of these data sources for effective and flexible disaster management.

Visualising Execution Profiles of Constraint Programs

Constraint programming is a rapidly advancing paradigm that aims to separate the specification of difficult problems from the algorithmic details of finding their solution. A constraint program consists of a declarative description of the problem as an objective function and constraints. The optimization of the objective function subject to the constraints is then carried out by a generic solver. Rapid progress in AI and solver techniques has made constraint programming a valuable tool for transport scheduling, valuing financial instruments, designing efficient energy networks, and many other difficult optimization problems.

In a perfect world, the programmer should be able to treat the solver as a 'black box'; allowing her to focus her attention on modelling the problem. However, there are times when insight into the internal state of the solver can help the programmer to modify the program to reduce the size of the search space and hence help the solver to more rapidly find a solution. This is particularly important for large-scale problems involving thousands of variables and constraints.

This project aims to provide visualization techniques that will allow us to peer inside the 'black box' and understand how program changes affect the internal state of the solver. We will need to develop sophisticated computer-graphics techniques for visually mapping large tree-like search spaces back to program sources in a compelling interactive way.

Monash Infrastructure (MI) and Institute of Railway Technology (IRT) funded project on three rail technology priorities: 1. Power and Propulsion; 2. Materials and Manufacturing; 3. Design, Modelling and Simulation

The scholarship is jointly funded by the Rail Manufacturing CRC and the Faculty of IT. For more information, contact the Faculty of IT Manager, Graduate Student Services. Monash Infrastructure, together with the Institute of Railway Technology and the Rail Manufacturing CRC, is offering PhD scholarships commencing in 2017. The aim is to develop PhD researchers in relevant disciplines who will apply their skills and expertise to increase innovation in the rail manufacturing sector. Projects will align with the railway technology priorities of the Rail Manufacturing CRC:

Priority 1: Power and Propulsion

  • Lithium battery development for rail applications
  • Novel electric traction motor development
  • Advanced braking systems.

Priority 2: Materials and Manufacturing

  • Alloy and process development for rolling stock
  • Robotics and automated production processes
  • Light-weighting of rail componentry
  • Rolling stock maintenance cost reduction.

Priority 3: Design, Modelling and Simulation

  • Automated online health monitoring and condition monitoring systems for rail componentry
  • Data analysis and algorithm analysis for real-time rail applications
  • Computerised modelling tools for rolling stock fabrication.
Supervisors: To be advised

 

Sensilab

Project Title Supervisors

Immersive 3D generative modelling as an educational tool

MGE

The project will develop an immersive environment for 3D generative modelling for children.

3D printing is now readily available and has proven to be a very successful tool for engaging kids with information technology. This has been recognised by industry and academia, and a number of 3D modelling environments for kids, even of primary school age, have appeared.

However, the ones that are conceptually simple are overly simplistic in their modelling approach. Simply put, kids can model in a much better and more versatile way with Lego than with any virtual environment. One of the major hurdles for children is the conceptual mapping of the on-screen 2D world to the intended 3D object. Another is decomposing an intended 3D manipulation step into a sequence of 2D interactions. Surprisingly, none of the modelling environments aimed at children uses immersive 3D visualisation and interaction, even though these technologies have now become inexpensive and readily available.

A second, related technology that has proven very successful at engaging kids is generative and parametric art, i.e. the generation of art objects by writing programs that create them (or modify a seed object) rather than by direct manipulation of (virtual) materials. Educationally, the big attraction of this technology is that it bridges from pure use of IT to the world of programming. Any number of educational tools are available for generative art in 2D, and some of these, in particular Processing, are extremely popular and in wide use. However, almost none are available for 3D modelling. The few that exist, like Grasshopper, are parts of complex professional tool suites aimed at adults.
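The essence of parametric modelling, defining an object by a program and its parameters rather than by direct manipulation, can be sketched very simply (the helix function below is a made-up illustration, not part of the project):

```python
import math

def helix(n_points=100, radius=1.0, pitch=0.1):
    """Generate the vertices of a 3D helix parametrically.

    Changing `radius` or `pitch` regenerates a whole family of shapes,
    which is what makes generative modelling educationally interesting:
    the program, not the mouse, is the modelling tool.
    """
    return [
        (radius * math.cos(t), radius * math.sin(t), pitch * t)
        for t in (i * 2 * math.pi / 20 for i in range(n_points))
    ]

vertices = helix()
print(len(vertices), vertices[0])
```

In an immersive environment, such generated vertex lists would be rendered and manipulated directly in 3D, removing the 2D-to-3D mapping hurdle described above.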

The project will bring these two areas together by creating an immersive environment for 3D generative modelling for children. This will open new educational opportunities for children and will generate new insights into how children interact with and conceptualise 3D worlds.

PhD by Practice-Based Research and Exegesis Scholarships