PhD application: System-Technology Co-optimization for enablement of MRAM-based Machine Learning

Research & development - Leuven

System architecture evaluation with respect to advanced technology and device roadmap: system technology co-optimization (STCO)

System-Technology Co-optimization for enablement of MRAM-based Machine Learning hardware 

To be started in the frame of the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 955671 SPEAR project


Host Institution 
Imec is a world-leading research and innovation hub in nanoelectronics and digital technologies. The STCO program at imec aims at shaping new system architectures built on novel technology developments. STCO leverages the three pillars of technology research: compute (advanced logic scaling), store (emerging memory), and connect (interconnect topologies), for benchmarking application-specific architectures.

Dr. Dwaipayan Biswas 


Machine learning (ML) techniques such as deep neural networks (DNNs) have achieved important breakthroughs in a myriad of application domains. The core operations in a DNN are matrix-vector multiplications (MVMs). In the majority of use cases, a dedicated training step produces a set of model parameters, which are then used in an inference step to generate classification/prediction outcomes. Training of DNNs has traditionally been carried out using software compute capabilities, while considerable research effort has been spent by the community on accelerating inference on-chip for (near) real-time outcomes, optimizing energy and accuracy. There is a need to look at optimized training procedures that reduce the energy footprint with minimal accuracy trade-off. Minimizing data movement between compute and memory blocks (the non-von Neumann trajectory) has had great success towards energy optimization, especially in the accelerated-inference landscape for ML applications. This has primarily been achieved through compute-near/in-memory (CnM/CiM) techniques. Devices based on standard as well as novel/emerging technologies have been the main contributors to the CiM/CnM paradigm, helping to optimize the core MVM operation. Both digital multiply-accumulate circuits and Kirchhoff's-law-based analogue-domain processing have been explored to avoid costly fetches to an external memory.
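To make the two compute styles concrete, here is a minimal, hypothetical Python sketch (not imec code) contrasting a digital MAC-based MVM with an idealized analogue compute-in-memory MVM, where weights are modelled as conductances and Kirchhoff's current law performs the summation on each bitline:

```python
def mvm_digital(W, x):
    """Digital realization: one multiply-accumulate (MAC) per weight,
    y_i = sum_j W[i][j] * x[j]."""
    return [sum(w_ij * x_j for w_ij, x_j in zip(row, x)) for row in W]

def mvm_analog(G, V):
    """Idealized analogue CiM model: weights stored as conductances G in a
    crossbar, inputs applied as wordline voltages V. Kirchhoff's current law
    sums the per-cell currents on each bitline: I_j = sum_i G[i][j] * V[i]."""
    rows, cols = len(G), len(G[0])
    return [sum(G[i][j] * V[i] for i in range(rows)) for j in range(cols)]

# With ideal devices the two agree (note the transposed weight layout
# for the crossbar orientation):
W = [[1, 2], [3, 4]]
x = [1, 1]
G = [[1, 3], [2, 4]]  # transpose of W
print(mvm_digital(W, x))  # [3, 7]
print(mvm_analog(G, x))   # [3, 7]
```

In practice, the analogue sum is perturbed by device non-idealities (conductance variation, noise, limited precision), which is exactly where device-aware co-optimization comes in.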

Dense non-volatile memories (NVMs) with large resistance (MOhm range) and narrow parameter distributions are promising candidates; however, the typical write penalties of the standard spin-transfer torque (STT) variant of MRAM technology could be a bottleneck for their adoption. This is mitigated by emerging MRAM writing concepts: spin-orbit torque (SOT) and voltage-controlled magnetic anisotropy (VCMA). In addition, design solutions have been proposed to create multi-level-bit magnetic tunnel junction (MTJ) cells and are currently being prototyped for further demonstration. This project will explore design-technology co-optimization (DTCO) using in-house SOT/VGSOT-MRAM technology-based ML hardware to optimize ML-training-related system performance for a dedicated application space. This will help close the bottom-up loop connecting device characteristics to system power/performance metrics, enabling system-technology co-optimization (STCO) for CnM-centric ML applications.
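As an illustration of what multi-level MTJ cells imply at the algorithm level, the hypothetical sketch below (uniform, assumed conductance states, not measured device data) quantizes a real-valued weight to the nearest of the 2^b discrete states a b-bit cell could store:

```python
def device_levels(bits, g_min=0.0, g_max=1.0):
    """Assumed uniform set of 2**bits conductance states for a multi-level cell.
    g_min/g_max are illustrative bounds, not characterized device values."""
    n = 2 ** bits
    return [g_min + i * (g_max - g_min) / (n - 1) for i in range(n)]

def quantize(w, levels):
    """Map a real-valued weight to the nearest available device state."""
    return min(levels, key=lambda g: abs(g - w))

# A 2-bit cell offers four states; a weight of 0.3 snaps to the nearest one:
levels = device_levels(2)          # [0.0, 1/3, 2/3, 1.0]
print(quantize(0.3, levels))       # ~0.333
```

The gap between the original weight and its quantized value is the kind of accuracy cost that device-level choices (number of levels, distribution width) trade against energy, which is what the DTCO loop in this project evaluates.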

In this PhD you will:

1) Understand device characteristics of binary and multi-level-bit SOT-MTJs.

2) DTCO: make architecture-level choices that help optimize the device knobs, yielding low-energy hardware solutions for ML.

3) STCO: explore novel compute-near/in-memory concepts using MRAM to estimate the system PPA (power, performance, area) impact.

The PhD is expected to develop: i) a device-level understanding enabling circuit-level DTCO for ML, and ii) architecture-level choices, in conjunction with MRAM technology, that help optimize system PPA for ML training/inference. You are expected to participate in both circuit- and architecture-level optimization loops, enabling device benchmarking.


  • Master’s degree in Computer Engineering or Electrical Engineering 
  • Fundamental knowledge of neural networks 
  • Computer architecture and circuit design skills 
  • Owing to the international nature of the research center, good verbal and written communication skills in English are a must. 
  • Although not mandatory for this position, the following prior experience could be beneficial:  
  • Electrical device characterization 
  • Knowledge within the field of magnetism, spintronics, magnetic tunnel junctions 
  • MSCA-ITN eligibility requirements:

The 15 researchers who will be recruited to work on SPEAR:

  • may be of any nationality and 
  • must be proficient in written and spoken English.

At the time of recruitment (i.e. contract starting date) the researchers:

  • must be early-stage researchers (ESRs), meaning that they must be in the first four years of their research careers (full-time equivalent), i.e. four years or less must have passed since they obtained the degree which entitles them to embark on a doctorate;
  • must not have been awarded a doctoral degree;
  • must not have resided or carried out their main activity (work, studies, etc.) in the country of the host organisation for more than 12 months in the last 3 years.

Planned secondments:  

  1. ETHZ, Pietro Gambardella, m17-m19, for specific SOT- and VCMA-based multi-level bit characterization.  
  2. NanOsc, Fredrik Magnusson, m31-m33, to evaluate hardware implementation methods of the NanOsc concept through DTCO analysis. 

Registering University 
KU Leuven 

Deadline: Sunday, Sep 11 2022