

Dear DATE community,

We, the DATE Sponsors Committee (DSC) and the DATE Executive Committee (DEC), are deeply shocked and saddened by the tragedy currently unfolding in Ukraine, and we would like to express our full solidarity with all the people and families affected by the war.

Our thoughts also go out to everyone in Ukraine and Russia, whether they are directly or indirectly affected by the events, and we extend our deep sympathy.

We condemn Russia’s military action in Ukraine, which violates international law, and we call on governments to take immediate action to protect everyone in that country, particularly its civilian population and people affiliated with its universities.

Now more than ever, our DATE community must promote our societal values (justice, freedom, respect, community, and responsibility) and confront this situation collectively and peacefully to end this senseless war.

DATE Sponsors and Executive Committees.


Kindly note that all times on the virtual conference platform are displayed in the user's time zone.

The time zone for all times mentioned on the DATE website is CET – Central European Time (UTC+1).

W05 Cross-layer algorithm & circuit design for signal processing with special emphasis on communication systems

Start
Friday, 18 March 2022 09:00
End
Friday, 18 March 2022 14:40
Important Dates
    Final Program available!
    General Chair
    Norbert Wehn, University of Kaiserslautern, Germany
    Program Co-Chair
    Leibin Ni, Huawei Technologies Co., Ltd., China
    Program Co-Chair
    Christian Weis, University of Kaiserslautern, Germany
    Publicity Chair
    Raymond Leung, Huawei Technologies Co., Ltd., China

    Content/Context:

    The scaling and evolution of semiconductor manufacturing technologies are triggering intense interdisciplinary and cross-layer activities. These activities have the potential to provide many benefits, such as greatly increased energy efficiency and resilience in the context of only partially reliable hardware circuit designs.

    Signal processing, particularly in the field of communication systems, can benefit greatly from these developments due to 1) rapidly increasing energy-efficiency requirements as a consequence of the demand for higher data rates and 2) the inherent fault tolerance of the underlying signal processing algorithms. In the context of a communication system, the reliability and robustness requirements vary widely with the target application: they are rather relaxed for wireless communication systems, where a comparatively high error rate in the processing outcome is acceptable, but rather stringent in the field of optical communications, which requires operation at very low error rates. These specific characteristics and requirements foster cross-layer approaches that consider the algorithm and the hardware circuit design jointly.

    The robustness of the hardware technology and processing architecture used (classical signal processing, ML computing, or in-memory processing), together with the resilience of the applied algorithm, determines the configuration and parameters of the complete application as well as the choice of algorithm. The further scaling of technology nodes and the slow-down of Moore’s law may force a deep rethinking of existing and future signal processing and communication systems, increasing the potential need for paradigm changes in the design of such systems.

    This workshop aims to provide a forum for discussing the challenges, trends, solutions and applications of these rapidly evolving cross-layer approaches to the algorithm and circuit design of communication and signal processing systems by gathering researchers and engineers from academia and industry. It also aims to create a unique network of competence and experts in all aspects of cross-layer solutions and technologies, including manufacturing reliability, architectures, design, algorithms, automation and test. The workshop will therefore give contributors the opportunity to share and discuss state-of-the-art knowledge and their work in progress.

    The topics that will be discussed in this workshop include but are not limited to:

    • Cross-layer design approaches
    • Approximate computing in signal processing and communication systems
    • Algorithm design for communication systems and optimization for hardware implementation
    • ML and CNN computing for communication and signal processing systems
    • In-memory computing approaches for signal processing and communication systems

    Keynote speakers:

    1. Professor Andreas P. Burg, Telecommunications Circuits Laboratory, Ecole Polytechnique Federale de Lausanne (EPFL)
    2. Professor Stephan ten Brink, Director of the Institute of Telecommunications (INÜ), University of Stuttgart

    This workshop is supported by TU Kaiserslautern, Department of Electrical and Computer Engineering, Division of Microelectronic Systems Design.
    Participants can register for the workshop free of charge via the online registration platform.

    TECHNICAL PROGRAM

    W05.1 Welcome Address

    Session Start
    Fri, 09:00
    Session End
    Fri, 09:10
    Speaker
    Norbert Wehn, TU Kaiserslautern, Germany

    W05.2 Keynote I and Invited Talk

    Session Start
    Fri, 09:10
    Session End
    Fri, 10:30
    Session chair
    Norbert Wehn, TU Kaiserslautern, Germany
    Presentations

    W05.2.1 Keynote: "On the Curse and the Beauty of Randomness for Providing Reliable Quality Guarantees with Unreliable Silicon"

    Start
    09:10
    End
    10:00
    Keynote Speaker
    Andreas P. Burg, Ecole Polytechnique Federale de Lausanne (EPFL), Switzerland

    Abstract: Silicon implementations of complex algorithms (for communications and other applications) are burdened by extensive safety margins to ensure 100% reliable operation. These margins limit voltage scaling at the cost of energy/power consumption and require conservative layout rules, such as double-fins or the use of static memories for storage, that are costly in area. "Approximate computing" or "computing on unreliable silicon" promotes the idea of compromising reliability and tolerating occasional errors or parameter variations for the benefit of area and power/energy. This idea is especially relevant for applications such as communications or machine learning, where systems are inherently tolerant to noise or apply only stochastic quality metrics such as BER, FER, PSNR, or MSE.
    Unfortunately, the silicon industry has so far refused to even remotely consider any idea that involves compromising 100% reliable operation.
    The good reason for this strictly conservative, but costly (in area and power), approach is that the nature of errors (e.g., due to variations in the manufacturing process) is highly unpredictable and that it is almost impossible to predict the impact of a small error in the silicon on the quality of results (e.g., on the error rate in a communication receiver). In fact, reliability issues lead to a huge quality spread between manufactured chips even for the most fault-tolerant applications. However, chip manufacturers must provide reliable quality guarantees to their customers. While, for example, a slightly degraded but consistent error-rate performance or image quality is perfectly acceptable, it is not acceptable if some circuits provide good quality while others provide only poor quality.
    The key to successfully exploiting quality margins for the benefit of area and power is therefore not necessarily to minimize errors, but to ensure that all manufactured chips provide the same quality level, even if they are subject to different, more or less random errors.
    In this talk, we will explain this issue in detail by analyzing the nature of the errors that approximate computing proposes to tolerate. We argue that the randomness of errors is not only a curse, but can also be a beautiful characteristic that enables reliable quality guarantees. However, this beauty is not always naturally present in the silicon manufacturing process, though it can be restored. We will illustrate this with different examples, including an embedded system with a voltage-over-scaled SRAM and the first silicon implementation of an LDPC decoder that overcomes the limitations of an unreliable memory.
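
    To make the distinction concrete, the following minimal Python sketch (our illustration, not the speaker's silicon experiments; all error rates and chip counts are assumed) contrasts errors that are random in time with chip-specific defects: with random errors every chip measures essentially the same BER, whereas per-chip defect densities produce a persistent quality spread between chips.

    import random
    import statistics

    N_CHIPS = 50        # assumed number of manufactured chips
    N_BITS = 200_000    # bits read per chip during one quality measurement
    MEAN_P = 1e-3       # assumed average bit-error probability of the memory

    rng = random.Random(0)

    # (a) Errors random in time: every chip sees the same error probability,
    #     so the measured BER is essentially identical across chips.
    random_bers = [sum(rng.random() < MEAN_P for _ in range(N_BITS)) / N_BITS
                   for _ in range(N_CHIPS)]

    # (b) Chip-specific defects: process variation gives each chip its own
    #     defect density, so the quality spread between chips is much larger.
    defect_bers = []
    for _ in range(N_CHIPS):
        p_chip = max(0.0, rng.gauss(MEAN_P, 0.5 * MEAN_P))  # assumed variation model
        defect_bers.append(sum(rng.random() < p_chip for _ in range(N_BITS)) / N_BITS)

    print("random-in-time errors: mean BER %.2e, spread (stdev) %.2e"
          % (statistics.mean(random_bers), statistics.stdev(random_bers)))
    print("chip-specific defects: mean BER %.2e, spread (stdev) %.2e"
          % (statistics.mean(defect_bers), statistics.stdev(defect_bers)))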

    Bio: Andreas Burg (S'97-M'05) was born in Munich, Germany, in 1975. He received his Dipl.-Ing. degree from the Swiss Federal Institute of Technology (ETH) Zurich, Switzerland, in 2000, and the Dr. sc. techn. degree from the Integrated Systems Laboratory of ETH Zurich, in 2006.
    In 1998, he worked at Siemens Semiconductors, San Jose, CA. During his doctoral studies, he worked at Bell Labs Wireless Research for a total of one year. From 2006 to 2007, he was a postdoctoral researcher at the Integrated Systems Laboratory and at the Communication Theory Group of the ETH Zurich. In 2007 he co-founded Celestrius, an ETH-spinoff in the field of MIMO wireless communication, where he was responsible for the ASIC development as Director for VLSI. In January 2009, he joined ETH Zurich as SNF Assistant Professor and as head of the Signal Processing Circuits and Systems group at the Integrated Systems Laboratory. In January 2011, he joined the Ecole Polytechnique Federale de Lausanne (EPFL) where he is leading the Telecommunications Circuits Laboratory. He was promoted to Associate Professor with Tenure in June 2018.
    Mr. Burg has served on the TPC of various conferences on signal processing, communications, and VLSI. He was a TPC co-chair for VLSI-SoC 2012 and the TPC co-chair for ESSCIRC 2016 and SiPS 2017. He was a General Chair of ISLPED 2019, and he served as an Editor for the IEEE Transactions on Circuits and Systems in 2013 and on the Editorial Board of the Springer Microelectronics Journal. He is currently an editor of the Springer Journal on Signal Processing Systems, the MDPI Journal on Low Power Electronics and its Applications, the IEEE Transactions on VLSI, and the IEEE Transactions on Signal Processing. He is also a member of the EURASIP SAT SPCN and the IEEE CAS-VSATC.

    W05.2.2 Invited Talk: "Implementation of multi-hundred-gigabit throughput optical FEC CoDec with non-refresh eDRAM"

    Start
    10:00
    End
    10:30
    Speaker
    Qinhui Huang, Huawei Technologies Co., Ltd., China
    Speaker
    Kechao Huang, Huawei Technologies Co., Ltd., China

    Abstract: Forward-error-correction codes (FECs) are essential elements in the field of optical communication for delivering ultra-reliable transmission. In the last decade, communication engineers have become unprecedentedly eager for power-efficient FECs due to the slow-down of Moore’s law and the increasing demand for higher data rates. Typically, the area and power consumption of today’s high-speed FECs are dominated by memories. Embedded DRAM (eDRAM) is a promising approach to this issue because it requires fewer transistors. The algorithm and circuit can be co-designed, and the refresh module can be removed in such a domain-specific eDRAM. By using non-refresh eDRAM instead of conventional SRAM, significant power reduction and area savings can be achieved in high-speed FECs.
    This talk will address eDRAM-based implementation issues for some mainstream optical FECs, focusing on the staircase code and the zipper code. The former is a widely adopted standardized FEC, and the latter, proposed recently, is an upgraded version of the staircase code.
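
    As a rough, hedged illustration of why the refresh logic can be dropped (all numbers below are assumptions for the sketch, not design figures from the talk): refresh is unnecessary whenever every value stored in the decoder's eDRAM is consumed or overwritten well before the cell retention time expires.

    # Back-of-the-envelope check in Python; retention, throughput, and window
    # size are illustrative assumptions only.
    retention_time_us = 40.0          # assumed worst-case eDRAM retention time
    throughput_gbps = 800.0           # assumed aggregate decoder throughput
    window_bits = 16 * 512 * 510      # assumed decoding-window size held in eDRAM

    # Time a stored bit spends in the decoding window before being overwritten:
    residency_us = window_bits / (throughput_gbps * 1e9) * 1e6

    print(f"data residency in eDRAM: {residency_us:.2f} us")
    print("refresh-free operation possible:", residency_us < retention_time_us)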

     

    W05.3 Coffee Break

    Session Start
    Fri, 10:30
    Session End
    Fri, 10:45

    W05.4 Keynote II and Invited Talk

    Session Start
    Fri, 10:45
    Session End
    Fri, 12:00
    Session chair
    Christian Weis, University of Kaiserslautern, Germany
    Presentations

    W05.4.1 Keynote: "Deep Learning Applications in Wireless Communications based on Distributed Massive MIMO Channel Sounding Data"

    Start
    10:45
    End
    11:30
    Keynote Speaker
    Stephan ten Brink, Institute of Telecommunications, University of Stuttgart, Germany

    Abstract: A distributed massive MIMO channel sounder for acquiring CSI datasets is presented. The measured data has several applications in the study of different machine learning algorithms. Each individual single-antenna receiver is completely autonomous, enabling arbitrary grouping into spatially distributed antenna deployments and offering virtually unlimited scalability in the number of antennas. The deep learning applications presented include absolute and relative user localization, such as “channel charting”, and CSI inference for UL/DL FDD massive MIMO operation.
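
    As a hedged toy example of what “channel charting” means here (the geometry, channel model and PCA embedding below are our own minimal assumptions, not the speaker's pipeline), high-dimensional CSI features can be embedded into a low-dimensional chart that preserves neighborhood relations between users:

    import numpy as np

    rng = np.random.default_rng(0)
    n_users, n_antennas = 200, 32
    user_pos = rng.uniform(0, 100, size=(n_users, 2))    # users in a 100 m x 100 m area
    ant_pos = rng.uniform(0, 100, size=(n_antennas, 2))  # distributed single-antenna receivers

    # Distance-dependent received power per antenna as a simple CSI feature
    d = np.linalg.norm(user_pos[:, None, :] - ant_pos[None, :, :], axis=-1)
    features = -20.0 * np.log10(d + 1.0)                 # path loss in dB, shape (users, antennas)
    features += rng.normal(0.0, 1.0, features.shape)     # measurement noise

    # PCA embedding into 2D acts as a very simple channel chart
    centered = features - features.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    chart = centered @ vt[:2].T                          # (users, 2) chart coordinates

    # Sanity check: users that are close in space should stay close in the chart
    j = 1 + int(np.argmin(np.linalg.norm(user_pos[1:] - user_pos[0], axis=1)))
    print("spatial distance, user 0 to nearest neighbour:", np.linalg.norm(user_pos[0] - user_pos[j]))
    print("chart distance,   user 0 to nearest neighbour:", np.linalg.norm(chart[0] - chart[j]))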

    Bio: Stephan ten Brink has been a faculty member at the University of Stuttgart, Germany, since July 2013, where he is head of the Institute of Telecommunications. From 1995 to 1997 and 2000 to 2003, Dr. ten Brink was with Bell Laboratories in Holmdel, New Jersey, conducting research on multiple antenna systems. From July 2003 to March 2010, he was with Realtek Semiconductor Corp., Irvine, California, as Director of the wireless ASIC department, developing WLAN and UWB single chip MAC/PHY CMOS solutions. In April 2010 he returned to Bell Laboratories as Department Head of the Wireless Physical Layer Research Department in Stuttgart, Germany. Dr. ten Brink is an IEEE Fellow, and recipient and co-recipient of several awards, including the Vodafone Innovation Award, the IEEE Stephen O. Rice Paper Prize, and the IEEE Communications Society Leonard G. Abraham Prize for contributions to channel coding and signal detection for multiple-antenna systems. He is best known for his work on iterative decoding (EXIT charts), MIMO communications (soft sphere detection, massive MIMO), and deep learning applied to communications.

    W05.4.2 Invited Talk: "Communication-Aware Cross-Layer Codesign Strategy for Energy Efficient Machine Learning SoC"

    Start
    11:30
    End
    12:00
    Speaker
    Chixiao Chen, Fudan University, China

    Abstract: With the great success of artificial intelligence algorithms, machine learning SoCs have recently become a significant class of high-performance processors. However, the limited power budget of edge devices cannot support GPUs and intensive DRAM access. This talk will discuss several energy-efficient codesign examples that avoid power-hungry hardware. First, on-chip incremental learning is performed on an SoC without dedicated backpropagation computing, where algorithm-architecture codesign is involved. Second, low bit-width quantization schemes are applied to a computing-in-memory-based SoC, where algorithm-circuit codesign is investigated. Moreover, data-flow optimization is mapped onto a multi-chiplet-module system, where architecture-package codesign is discussed.
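
    As a minimal sketch of the second point (the bit width and quantization scheme here are assumptions for illustration, not the presented design), weights can be quantized to a few bits with a single scale factor before being mapped onto a computing-in-memory macro:

    import numpy as np

    def quantize_weights(w, n_bits=4):
        """Symmetric uniform quantization: integers for the CIM array plus one scale factor."""
        qmax = 2 ** (n_bits - 1) - 1                       # 4-bit -> integer range [-7, 7]
        scale = np.max(np.abs(w)) / qmax
        q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int8)
        return q, scale

    rng = np.random.default_rng(0)
    w = rng.normal(0.0, 0.05, size=(64, 64))               # toy weight matrix of one layer
    x = rng.normal(0.0, 1.0, size=64)                      # toy activation vector

    q, scale = quantize_weights(w, n_bits=4)
    y_ref = w @ x                                          # full-precision reference
    y_cim = (q.astype(np.float64) @ x) * scale             # what an ideal integer CIM macro returns

    print("relative error with 4-bit weights: %.3f"
          % (np.linalg.norm(y_ref - y_cim) / np.linalg.norm(y_ref)))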

     

    W05.5 Lunch Break

    Session Start
    Fri, 12:00
    Session End
    Fri, 13:00

    W05.6 Invited Talks: From Pass-Transistor-Logic to Computing-In-Memory

    Session Start
    Fri, 13:00
    Session End
    Fri, 14:30
    Session chair
    Leibin Ni, Huawei Technologies Co., Ltd., China
    Session chair
    Christian Weis, University of Kaiserslautern, Germany
    Presentations

    W05.6.1 Invited Talk I: "Research and Design of Pass Transistor Based Multipliers and their Design for Test for Convolutional Neural Network Computation"

    Start
    13:00
    End
    13:30
    Speaker
    Zhiyi Yu, Sun Yat-sen University, Zhuhai, China
    Speaker
    Ningyuan Yin, Sun Yat-sen University, Zhuhai, China

    Abstract: Convolutional Neural Networks (CNNs) feature different bit widths at different layers and have been widely used in mobile and embedded applications. The implementation of a CNN may include multipliers, which can incur large overheads and suffer from a high timing error rate due to their large delay. The pass transistor logic (PTL) based multiplier is a promising solution to such issues: it uses fewer transistors and reduces the number of gates in the critical path, which lowers the worst-case delay and, as a result, the timing error rate. In this talk, we present PTL-based multipliers and their design for test (DFT). An error model is built to analyze the error rate and to help with DFT. According to simulation results, compared to a traditional CMOS-based multiplier, the energy per operation (J/op) of PTL multipliers can be reduced by over 20%.
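
    The delay-versus-error-rate argument can be illustrated with a small, purely hypothetical Monte Carlo model (gate counts, per-gate delays and the clock period below are invented numbers, not results from the talk): a timing error occurs whenever the accumulated critical-path delay exceeds the clock period, so removing gates from the critical path lowers the error rate.

    import random

    def timing_error_rate(n_gates, clock_ps=900.0, trials=100_000, seed=1):
        """Fraction of cycles in which the random critical-path delay exceeds the clock period."""
        rng = random.Random(seed)
        errors = 0
        for _ in range(trials):
            delay = sum(rng.gauss(30.0, 6.0) for _ in range(n_gates))  # assumed per-gate delay (ps)
            if delay > clock_ps:
                errors += 1
        return errors / trials

    print("assumed CMOS multiplier, 28 gates on critical path:", timing_error_rate(28))
    print("assumed PTL  multiplier, 24 gates on critical path:", timing_error_rate(24))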

     

    W05.6.2 Invited Talk II: "WTM2101: Computing-in-memory SoC"

    Start
    13:30
    End
    14:00
    Speaker
    Shaodi Wang, Zhicun (WITINMEM) Technology Co. Ltd., China

    Abstract: In this talk, we will introduce an ultra-low-power neural processing SoC chip with computing-in-memory technology. We have designed, fabricated, and tested chips based on a nonvolatile floating-gate technology. The chip simultaneously solves the data processing and communication bottlenecks in NNs. Furthermore, thanks to the nonvolatility of the floating-gate cell, the computing-in-memory macros can be powered down during the idle state, which saves leakage power in IoT use cases such as voice command recognition. The chip supports multiple NNs, including DNN, TDNN, and RNN, for different applications.

     

    W05.6.3 Invited Talk III: "Implementation and performance analysis of computing-in-memory towards communication systems"

    Start
    14:00
    End
    14:30
    Speaker
    Zhihang Wu, Huawei Technologies Co., Ltd., China
    Speaker
    Leibin Ni, Huawei Technologies Co., Ltd., China

    Abstract: Computing-in-memory (CIM) is an emerging technique for overcoming the memory-wall bottleneck. It reduces data movement between memory and processor and yields significant power reductions in neural network accelerators, especially in edge devices. Communication systems face power and heat-dissipation problems when implementing DSP algorithms in ASICs, so applying CIM techniques to communication systems to improve energy efficiency would have a great impact. This talk will discuss computing-in-memory techniques for communication systems. Some DSP modules (such as FIR, MIMO and FEC) will be re-organized and mapped onto computing-in-memory units as examples.
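
    As a hedged sketch of the mapping idea for one of these modules (filter taps and sizes below are illustrative assumptions), a FIR filter is a convolution, and a convolution can be written as a Toeplitz matrix-vector product, which is exactly the operation a computing-in-memory array performs:

    import numpy as np

    taps = np.array([0.1, 0.25, 0.3, 0.25, 0.1])       # assumed 5-tap FIR filter
    x = np.random.default_rng(0).normal(size=64)        # input sample block

    # Rows of the Toeplitz operator hold shifted, reversed copies of the taps;
    # in a CIM implementation these rows would be programmed into the memory array.
    n_out = len(x) - len(taps) + 1
    T = np.zeros((n_out, len(x)))
    for i in range(n_out):
        T[i, i:i + len(taps)] = taps[::-1]

    y_matvec = T @ x                                     # what the CIM array would compute
    y_ref = np.convolve(x, taps, mode="valid")           # standard FIR output

    print("max |difference| between the two results:", np.max(np.abs(y_matvec - y_ref)))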

     

    W05.7 Closing Notes

    Session Start
    Fri, 14:30
    Session End
    Fri, 14:40
    Speaker
    Christian Weis, University of Kaiserslautern, Germany
    Speaker
    Leibin Ni, Huawei Technologies Co., Ltd., China