Scientific Talks
At the second National Informatics Conference of Iran, a number of scientific talks will be given by distinguished faculty from reputable universities inside and outside the country. The presenting speakers and the topics of their talks are introduced below:

Dr. Saeed Akhoondian Amiri
Recording of the talk

Exploiting the Structural Properties of Sparse Graphs in Distributed Models

In distributed models of computation, every node of a network can be seen as a computational entity that runs a certain algorithm. In the model under consideration, nodes communicate with each other in synchronous rounds. Communication is costly for several reasons: extra communication consumes more power and time, it puts additional load on the links, the security of the communication must be ensured at all times, and re-ordering of packets may have to be dealt with. Hence, in distributed models one of the main complexity measures of an algorithm is the number of communication rounds. Over the past 30 years, many distributed algorithms have been developed to reduce round complexity. Many existing distributed algorithms are designed for general graphs; however, many practical networks have a sparse structure: the networks of major network providers in the United States have tiny treewidth, sensors embedded on the surface of a spaceship form an almost planar graph, and well-known social networks are closely related to classes of graphs of bounded arboricity. Given the sparsity of many real networks, one main concern nowadays is to design fast distributed algorithms when the underlying graph is sparse or admits nice structural properties. In this talk, we will go over recent developments in this area. We will mainly focus on graph covering problems (e.g., dominating set, independent set, graph coloring). In particular, we take the dominating set problem as a case study to present several mechanisms that have been designed to develop faster distributed algorithms for sparse graphs.
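
As a toy illustration of the round-based setting described above (an assumption for this page, not an algorithm from the talk), the sketch below simulates synchronous rounds in which each node looks only at its 1-hop neighbourhood and every undominated node whose ID is locally maximal among its undominated neighbours joins the dominating set; the grid topology and the ID-based tie-breaking rule are illustrative choices.

```python
import networkx as nx

def greedy_dominating_set(graph: nx.Graph):
    """Round-by-round simulation in the spirit of the LOCAL model."""
    dominated = {v: False for v in graph}
    dom_set, rounds = set(), 0
    while not all(dominated.values()):
        rounds += 1
        joining = []
        for v in graph:
            if dominated[v]:
                continue
            # "Message exchange": inspect only the 1-hop neighbourhood.
            undominated_nbrs = [u for u in graph[v] if not dominated[u]]
            if all(v > u for u in undominated_nbrs):  # locally maximal ID joins
                joining.append(v)
        for v in joining:
            dom_set.add(v)
            dominated[v] = True
            for u in graph[v]:
                dominated[u] = True
    return dom_set, rounds

G = nx.grid_2d_graph(10, 10)   # a planar, sparse example topology
D, r = greedy_dominating_set(G)
print(len(D), "dominators selected in", r, "synchronous rounds")
```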

Biography

Currently a postdoctoral researcher at the University of Cologne. Dec. 2017 - Dec. 2019: postdoc at MPII (Max Planck Institute for Informatics). Oct. 2012 - Dec. 2017: Ph.D. studies at the University of Berlin; thesis title: "Structural Graph Theory Meets Algorithms: Covering and Connectivity Problems in Networks". Jan. 2012: M.Sc. in Computer Engineering from the University of Tehran; thesis title: "On thin tree in bounded tree width graphs".


Dr. Masoumeh (Azin) Ebrahimi
Recording of the talk

Interconnection Networks for Deep Neural Network Accelerators

Deep neural networks (DNNs) have led to significant improvements in many applications of artificial intelligence (AI), such as image classification and speech recognition. To support advanced DNN applications, networks must become larger and deeper, which demands a dramatic improvement in the performance and power efficiency of computing platforms. In addition, due to massive parallel processing, the performance of current large-scale artificial neural networks is often limited by huge communication overheads and storage requirements. A flexible interconnection in a DNN accelerator brings the advantage of supporting different computation flows, which increases computing flexibility. In this presentation, different interconnection methods for DNN accelerators will be discussed. Then, a DNN accelerator design with a flexible interconnection network based on Networks-on-Chip will be explained.
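
As a back-of-the-envelope illustration of why the on-chip interconnect matters (an assumption added here, not material from the talk), the following sketch computes the average Manhattan hop count on a k x k mesh NoC when a single global buffer at corner (0, 0) sends data to every processing element; the mesh sizes and traffic pattern are illustrative.

```python
def average_hops(k: int) -> float:
    """Average Manhattan distance from (0, 0) to all PEs of a k x k mesh."""
    hops = [x + y for x in range(k) for y in range(k)]
    return sum(hops) / len(hops)

for k in (4, 8, 16):
    print(f"{k}x{k} mesh: average {average_hops(k):.1f} hops per packet")
```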

Biography

Masoumeh (Azin) Ebrahimi received her Ph.D. with honors from the University of Turku, Finland, in 2013 and an MBA in 2015. She is currently a docent at KTH Royal Institute of Technology, Sweden, and an adjunct professor at the University of Turku, Finland. Her scientific work comprises more than 100 publications, including journal articles, conference papers, book chapters, edited proceedings, and edited special journal issues. The majority of her work has been on on-chip interconnection networks, fault-tolerant methods, and deep learning accelerators. She is active as a guest editor, organizer, and program chair of various workshops and conferences.


Dr. Amirali Baniasadi
Recording of the talk

A Smart Streaming Ladder

With the advance of technology, we see televisions and displays introduced every day that are cheaper and of higher resolution than ever before. Nevertheless, the limiting factor for the quality of live and online video remains network bandwidth. In the not-so-distant future, other video-transmission applications, such as driverless cars relying on cloud processing, will require even higher data-rate standards and will make this bandwidth limitation even more apparent. Today, some manufacturers of home displays equip their products with a "quality-enhancement unit" responsible for removing noise (introduced by the encoding process) or for upscaling the resolution (to exploit all the pixels of an 8K/4K panel). These capabilities, however, are ignored by the companies that provide live video streaming services. In this work, our goal is to provide a framework with which the video sender, by taking into account the presence or absence of quality-enhancement or super-resolution units on the receiver side, can reduce the bandwidth it needs for transmission without any perceptible impact on the quality experienced by the viewer.
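
A hedged sketch of the idea in this abstract: if the client advertises an upscaling/denoising unit, the sender can pick a lower rung of the bitrate ladder and let the client restore the resolution. The ladder values and the capability flag are illustrative assumptions, not the authors' system.

```python
LADDER = [  # (height, kbit/s) -- a typical adaptive-streaming ladder
    (2160, 16000), (1440, 9000), (1080, 5000), (720, 2800), (480, 1200),
]

def pick_rung(bandwidth_kbps: int, client_can_upscale: bool):
    for height, rate in LADDER:
        if rate <= bandwidth_kbps:
            if client_can_upscale and height > 480:
                # Send one rung lower; the client's enhancement unit upscales.
                idx = LADDER.index((height, rate))
                return LADDER[min(idx + 1, len(LADDER) - 1)]
            return (height, rate)
    return LADDER[-1]

print(pick_rung(6000, client_can_upscale=False))  # (1080, 5000)
print(pick_rung(6000, client_can_upscale=True))   # (720, 2800): ~44% less bandwidth
```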

Biography

Amirali Baniasadi studied electrical and computer engineering at the University of Tehran (1992), Sharif University of Technology (1994), and Northwestern University (2002). He is a professor and the head of the Electrical and Computer Engineering department at the University of Victoria, Canada.


Dr. Arash Tavakkol

An Overview of Fairness and Quality-of-Service Challenges in Modern Cloud-based Storage Systems

The storage subsystem is a fundamental performance bottleneck in running data-intensive enterprise and cloud-based applications. This problem has escalated and thus attracted many researchers over the past couple of years due to the recent system design, application, and technology trends that require more storage capacity, I/O bandwidth, performance predictability, and data availability out of the storage subsystem. In this talk, we first discuss major fairness and quality-of-service challenges of modern storage system architectures in the presence of resource sharing in the cloud-based infrastructures. We then discuss our proposed solutions to alleviate these challenges. We will touch on several key topics: 1) how to accurately model the new sophisticated features of modern SSDs such as the NVMe host interface protocols, 2) unfairness and quality-of-service issues of modern off-the-shelf Solid-State Drives (SSDs) in shared environments, 3) enabling fair and high-performance I/O request scheduling in modern NVMe SSDs, 4) deterministic device service periods, a new possibility in modern NVMe standards.
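
As a minimal, assumed model of the fairness problem mentioned above (not the scheduler proposed in the talk), the sketch below services per-tenant I/O queues in round-robin order so that one bursty tenant sharing an SSD cannot monopolize the device.

```python
from collections import deque

# Two tenants sharing one device: tenant_a is bursty, tenant_b is light.
queues = {"tenant_a": deque(f"A{i}" for i in range(5)),
          "tenant_b": deque(["B0"])}

def round_robin_dispatch(queues):
    """Dispatch requests flow by flow instead of in global arrival order."""
    order = []
    while any(queues.values()):
        for flow, q in queues.items():
            if q:
                order.append(q.popleft())
    return order

print(round_robin_dispatch(queues))   # ['A0', 'B0', 'A1', 'A2', 'A3', 'A4']
```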

Biography

Arash is a principal software engineer at RepRisk AG, Zurich, Switzerland, where he works on modern cloud-based data analytics infrastructures. His research interests are high-performance and scalable storage system design, architectures for big data analysis, and QoS and sharing issues in cloud-based platforms. Arash received his Ph.D. from Sharif University of Technology, Tehran, Iran, in 2015 and spent two years (2016-2018) as a postdoc in the Systems Group at ETH Zurich. He also has more than ten years of experience in high-performance computing and the design issues of modern scale-out data centers.


Dr. Pooyan Jamshidi
Recording of the talk

ATHENA: A Framework based on Diverse Weak Defenses for Building Adversarial Defense

The threat of adversarial examples has inspired a sizable body of research on various defense techniques. Assuming specific known attack(s), most existing defenses, although effective against those particular attacks, can be circumvented under slightly different conditions, either by a stronger adaptive adversary or, in some cases, even by weak (but different) adversaries. The "arms race" between attacks and defenses leads us to this central question: how can we, instead, design a defense, not as a technique, but as a framework with which one can construct a specific defense considering the niche trade-off space of the robustness one wants to achieve as well as the cost one is willing to pay for that level of robustness? To address this question, we propose ATHENA (the goddess of defense in Greek mythology), an extensible framework for building generic (and thus broadly applicable) yet effective defenses against adversarial attacks. The design philosophy behind ATHENA is based on an ensemble of many diverse weak defenses (WDs), where each WD, the building block of the framework, is a machine learning classifier (e.g., DNN, SVM) that first applies a transformation to the original input and then produces an output for the transformed input. Ensembling diverse weak defenses can result in a robust defense against a variety of attacks and provides a trade-off space in which one can build a more robust ensemble by adding more transformations, or an ensemble with lower overhead and cost by utilizing fewer transformations. ATHENA is a framework that enables building a customized defense, and our comprehensive study provides evidence that an ensemble of many diverse weak defenses provides such a trade-off space and has several desirable properties: (1) applicability across multiple models, (2) applicability in different domains (image, voice, video), and (3) agnosticism to particular attacks.

More info: https://softsys4ai.github.io/athena/
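
The abstract above centers on ensembling weak defenses, each built from a transformation plus a classifier. The following minimal sketch captures that shape with toy transformations, a toy classifier, and majority voting; it is an illustrative placeholder, not ATHENA's actual API.

```python
from collections import Counter
import numpy as np

def make_weak_defense(transform, classifier):
    """A weak defense = transform the input, then classify it."""
    return lambda x: classifier(transform(x))

def ensemble_predict(weak_defenses, x):
    """Majority vote over the weak defenses' predictions."""
    votes = [wd(x) for wd in weak_defenses]
    return Counter(votes).most_common(1)[0][0]

# Toy "classifier": sign of the mean pixel; toy transforms: flips and a shift.
clf = lambda x: int(np.mean(x) > 0)
transforms = [lambda x: x, np.flipud, np.fliplr,
              lambda x: np.clip(x + 0.05, -1, 1)]
wds = [make_weak_defense(t, clf) for t in transforms]

x = np.random.uniform(-1, 1, size=(8, 8))
print("ensemble label:", ensemble_predict(wds, x))
```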

Biography

Pooyan Jamshidi is an Assistant Professor at the University of South Carolina. He directs the AISys Lab, where he investigates the development of novel algorithmic and theoretically principled methods for machine learning systems. Prior to his current position, he was a research associate at Carnegie Mellon University and Imperial College London, where he primarily worked on transfer learning for performance understanding of highly-configurable systems including robotics and big data systems. Pooyan’s general research interests are at the intersection of systems/software and machine learning.

More info: http://pooyanjamshidi.github.io/


Dr. M. Reza Hoseiny Farahabady

Differential Computation and Scalability Issues in Modern Stream Processing Systems

Differential computation can be defined as a data processing paradigm for efficiently processing large volumes of data and quickly responding to "arbitrary changes" in input collections. There is no comprehensive study showing the effectiveness of modern processing platforms at performing differential computation, particularly in the presence of high arrival rates of streaming data within short periods (e.g., whether such incidents can cause serious degradation of the overall performance of the underlying system). In this talk, we study the fundamental question of finding the core bottlenecks of modern data processing systems (Spark, Flink, Storm, Timely Dataflow) when performing differential computation across a large-scale cluster, and whether we can develop advanced optimization mechanisms to effectively allocate the available computing resources for such executions as the number of nodes in the cluster grows.
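
A toy illustration of the paradigm (assumed here, not from the talk): maintain a grouped count and apply "arbitrary changes" to the input collection as signed deltas instead of recomputing the aggregate from scratch.

```python
from collections import Counter

counts = Counter()

def apply_delta(changes):
    """changes: iterable of (record, +1 | -1) describing insertions/retractions."""
    for record, sign in changes:
        counts[record] += sign
        if counts[record] == 0:
            del counts[record]        # keep the indexed state compact

apply_delta([("click", +1), ("view", +1), ("click", +1)])
apply_delta([("click", -1)])          # a retraction arrives in the stream
print(dict(counts))                   # {'click': 1, 'view': 1}
```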

Biography

M. Reza HoseinyF. received his B.Sc. in computer engineering and his M.Sc. in information technology and network engineering from Sharif University of Technology, Tehran, Iran (2005 and 2007), and his Ph.D. from the School of Information Technologies, the University of Sydney, Australia (2015). He is currently a research associate with the Centre for Distributed and High-Performance Computing, School of Computer Science, the University of Sydney, Sydney, Australia. His current research interests include parallel and distributed computing systems, stream processing engines for big data, and control systems engineering.


Dr. Masoud Daneshtalab
Recording of the talk

DeepMaker: Deep Learning Accelerator on Commercial Programmable Devices

Currently, the use of deep neural networks (DNN) in industrial systems is gaining significant momentum. However, optimization and deployment of DNN architectures for resource-constrained embedded programmable systems is a relatively costly process. In this talk, I will introduce the DeepMaker framework that optimizes the DNN architectures and generates synthesizable accelerators that can be used for different FPGA fabrics.

Biography

Masoud Daneshtalab (http://www.idt.mdh.se/~md/) is a full professor at Mälardalen University (MDH) in Sweden and an adjunct professor at Tallinn University of Technology (TalTech) in Estonia. He co-leads the Heterogeneous System research group (www.es.mdh.se/hero/). Since 2016 he has been on the Euromicro Board of Directors, a faculty member of the HiPEAC network, and a permanent associate editor of Elsevier MICPRO. His research interests include interconnection networks, hardware/software co-design, deep learning acceleration, and evolutionary optimization. He has published 2 books, 8 book chapters, and over 200 refereed international journal and conference papers. He has served on the Technical Program Committees of all major conferences in his area, including DAC, NOCS, DATE, ASPDAC, ICCAD, HPCC, ReCoSoC, SBCCI, ESTIMedia, VLSI Design, ICA3PP, SOCC, VDAT, DSD, PDP, ICESS, Norchip, MCSoC, CADS, EUC, DTIS, NESEA, CASEMANS, NoCArc, etc. He has co-led several research projects, including SafeDeep, AutoDeep, DeepMaker, DESTINE, PROVIDENT, HERO, AGENT, CUBRIC, and µBrain, with a total estimated budget of 11 MEuro.


Dr. Hossein Rahmani

Semi-supervised 2D Pose Estimation in Videos

Existing approaches for 2D pose estimation in videos often require a large number of dense annotations, which are costly and labor-intensive to acquire. In this work, we propose a semi-supervised pose estimation framework based on a generative pose transfer module to enable learning on temporally sparsely annotated videos. Furthermore, considering the large amount of redundancy in videos, a reinforcement-learning-based agent is proposed to harness the more informative frames on the fly so as to best learn the pose estimator under our pose transfer schema. To the best of our knowledge, this is the first time a generative pose transfer method has been introduced to address the problem of semi-supervised pose estimation in videos. The model learned on sparsely annotated data outperforms state-of-the-art models trained with full annotations on two large-scale pose estimation datasets: Penn Action and Sub-JHMDB. Furthermore, experimental results on the widely used multi-person pose estimation dataset PoseTrack17 indicate the strong generalization of our method to the multi-person setup.

Biography

Hossein Rahmani received the B.Sc. degree in computer software engineering from the Isfahan University of Technology, Isfahan, Iran, in 2004, the M.Sc. degree in software engineering from Shahid Beheshti University, Tehran, Iran, in 2010, and the Ph.D. degree from The University of Western Australia in 2016. He has published several papers in top conferences and journals such as CVPR, ICCV, ECCV, TPAMI, TIP, and IJCV. He is currently an Associate Professor (Lecturer) with the School of Computing and Communications, Lancaster University. Before that, he was a Research Fellow at the School of Computer Science and Software Engineering, The University of Western Australia. His research interests include computer vision, action recognition, 3D shape analysis, and machine learning.


Dr. Behzad Salami
Recording of the talk

An Energy-Efficiency and Resilience Study of Reconfigurable Hardware for the DNN Use Case

Energy dissipation is the main concern for modern computing systems, especially for emerging data-intensive applications like Convolutional Neural Networks (CNNs). In this talk, I will elaborate on an effective technique of Undervolting to mitigate this issue, especially for reconfigurable devices. More specifically, we empirically evaluate an undervolting technique, i.e., underscaling the circuit supply voltage below the nominal level, to improve the power-efficiency of CNN accelerators mapped to Field Programmable Gate Arrays (FPGAs). Undervolting below a safe voltage level can lead to timing faults due to excessive circuit latency increase. We evaluate the reliability-power trade-off for such accelerators. Specifically, we experimentally study the reduced-voltage operation of multiple components of real FPGAs, characterize the corresponding reliability behavior of CNN accelerators, propose techniques to minimize the drawbacks of reduced-voltage operation, and combine undervolting with architectural CNN optimization techniques, i.e., quantization and pruning. We investigate the effect of environmental temperature on the reliability-power trade-off of such accelerators. We perform experiments on three identical samples of modern Xilinx ZCU102 FPGA platforms with five state-of-the-art image classification CNN benchmarks. This approach allows us to study the effects of our undervolting technique for both software and hardware variability. We achieve more than 3X power-efficiency (GOPs/W) gain via undervolting. 2.6X of this gain is the result of eliminating the voltage guardband region, i.e., the safe voltage region below the nominal level that is set by the FPGA vendor to ensure correct functionality in worst-case environmental and circuit conditions. 43% of the power-efficiency gain is due to further undervolting below the guardband, which comes at the cost of accuracy loss in the CNN accelerator. We evaluate an effective frequency underscaling technique that prevents this accuracy loss, and find that it reduces the power-efficiency gain from 43% to 25%. Finally, I will discuss the potential of such an Undervolting technique for more advanced system paradigms.
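
A back-of-the-envelope sketch of why undervolting helps: dynamic power scales roughly with V^2 * f, so running the same workload at a reduced supply voltage (same frequency) improves GOPs/W quadratically. The voltage values below are illustrative assumptions, not the ZCU102 operating points from the talk.

```python
def relative_power(v: float, f: float, v_nom: float = 0.85, f_nom: float = 1.0) -> float:
    """Dynamic power relative to nominal, using the P ~ V^2 * f approximation."""
    return (v / v_nom) ** 2 * (f / f_nom)

for v in (0.85, 0.70, 0.60):
    p = relative_power(v, 1.0)
    print(f"V={v:.2f} V -> power {p:.2f}x nominal, efficiency {1 / p:.2f}x")
```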

Biography

Behzad Salami is a researcher in the Computer Science (CS) department of the Barcelona Supercomputing Center (BSC) and an affiliated member of the SAFARI Research Group at ETH Zurich. He received his Ph.D. degree in Computer Architecture with honors from Universitat Politècnica de Catalunya (UPC) in 2018, as well as master's and bachelor's degrees in Computer Engineering from Amirkabir University of Technology (AUT) and Iran University of Science and Technology (IUST), respectively. He has had short three-month research visits to the University of Manchester (UK) in 2017 and IPM (Iran) in 2019. He has worked on several EU-funded projects as a research scientist, including AXLE, LEGaTO, and EuroEXA. He is currently running a technology transfer project as the Principal Investigator (PI) in collaboration with industry. He has won several awards for his research, such as the HiPEAC paper award, a HiPEAC collaboration grant, a Tetramax technology transfer grant, and an OPRECOM grant. His research interests are Reconfigurable Computing, Processing-in-Memory, and Resilient and Energy-efficient Hardware Design.


Dr. Amir Shaikhha

Compilation and Code Optimization for Data Analytics

The trade-offs between the use of modern high-level and low-level programming languages in constructing complex software artifacts are well known. High-level languages allow for greater programmer productivity: abstraction and genericity allow for the same functionality to be implemented with significantly less code compared to low-level languages. However, the use of high-level languages comes at a performance cost: increased indirection due to abstraction, virtualization, and interpretation, and superfluous work, particularly in the form of temporary memory allocation and deallocation to support objects and encapsulation. As a result of this, the cost of high-level languages for performance-critical systems may seem prohibitive. The vision of "abstraction without regret" argues that it is possible to use high-level languages for building performance-critical systems that allow for both productivity and high performance, instead of trading off the former for the latter. In this talk, we realize this vision for building different types of data analytics systems. Our means of achieving this is by employing compilation. The goal is to compile away expensive language features -- to compile high-level code down to efficient low-level code.
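
A tiny illustration of "compiling away" abstraction in the spirit of the talk: a high-level map/filter/sum pipeline is specialized into a single fused loop with no intermediate collections or closures. The pipeline and the string-based code generator are toy assumptions, not the speaker's system.

```python
def compile_pipeline(predicate_src: str, mapper_src: str) -> str:
    """Generate low-level code: one loop, no temporaries, no indirection."""
    return (
        "def query(xs):\n"
        "    acc = 0\n"
        "    for x in xs:\n"
        f"        if {predicate_src}:\n"
        f"            acc += {mapper_src}\n"
        "    return acc\n"
    )

src = compile_pipeline("x % 2 == 0", "x * x")
namespace = {}
exec(src, namespace)                  # "compile" the generated specialized code
xs = range(10)
assert namespace["query"](xs) == sum(x * x for x in xs if x % 2 == 0)
print(src)
```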

Biography

Amir Shaikhha is an Assistant Professor (UK "Lecturer") in the School of Informatics at the University of Edinburgh. His research focuses on the design and implementation of data-analytics systems by using techniques from the databases, programming languages, compilers, and machine learning communities. Prior to that, he was a Departmental Lecturer at Oxford. He earned his Ph.D. from EPFL in 2018, for which he was awarded a Google Ph.D. Fellowship in structured data analysis, as well as a Ph.D. thesis distinction award.


Dr. Mehdi Tahoori
Recording of the talk

Test and Fault Tolerance of Neuromorphic Computing: Challenges and Opportunities

Memory-centric computing technologies and paradigms, including neuromorphic computing, are providing promising alternatives to tackle the memory wall and the power wall. Neuromorphic computing, for instance, is finding its way to efficiently implement deep learning and neural-network-based cognitive tasks by mimicking the human brain. In-memory computing based on emerging resistive memory technologies combines storage and computation capabilities in a single device based on analog computing concepts. While many emerging technologies are being investigated for the efficient implementation of such architectures and paradigms, there are several challenges related to failure modes, fault modeling, design for test, and test generation for these technologies and architectures. This talk addresses the design, test, and fault-tolerance aspects of neuromorphic computing technologies, circuits, and architectures.
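
A toy numerical model (assumed for illustration, not from the talk) of analog in-memory computation on a resistive crossbar: weights are stored as conductances, inputs are applied as voltages, and current summation along each column performs a matrix-vector multiply inside the memory array.

```python
import numpy as np

G = np.array([[0.2, 0.5, 0.1],     # conductances (S), one column per output
              [0.4, 0.3, 0.6]])
V = np.array([0.8, 0.2])           # input voltages (V) on the rows

I = V @ G                          # column currents (A) = analog dot products
print(I)                           # [0.24 0.46 0.2 ]
```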

Biography

Mehdi Tahoori is currently a Full Professor and the Chair of Dependable Nano-Computing at the Karlsruhe Institute of Technology, Germany. He received the B.S. degree in computer engineering from Sharif University of Technology, Tehran, Iran, in 2000, and the M.S. and Ph.D. degrees in electrical engineering from Stanford University, Stanford, CA, in 2002 and 2003, respectively. In 2003, he became an Assistant Professor with the Department of Electrical and Computer Engineering, Northeastern University, where he was promoted to Associate Professor in 2009. He has authored over 350 publications in major journals and conference proceedings on dependable computing and emerging nanotechnologies, and holds several US and European patents. He is currently the editor-in-chief of the Microelectronics Reliability journal, an associate editor of IEEE Design & Test magazine, a coordinating editor of the Springer Journal of Electronic Testing (JETTA), and an associate editor of IET Computers and Digital Techniques. He was the program chair of the VLSI Test Symposium and the general chair of the European Test Symposium. Prof. Tahoori was a recipient of the National Science Foundation Early Faculty Development (CAREER) Award. He has received a number of best paper nominations and awards at various conferences and journals. He is a fellow of the IEEE.


Dr. Somayeh Koohi
Recording of the talk

Optical Neural Networks for the Analysis of Biological Data

Bioinformatics is an interdisciplinary field that develops methods and tools for understanding biological data, drawing on several fields, including computer science, statistics, mathematics, and engineering, to describe and analyze biological data. One of the most important problems in bioinformatics is comparing biological sequences with each other to identify similar and dissimilar regions, an important application of which is the diagnosis of genetic diseases. Various comparison methods have been proposed so far, but they face limitations such as low processing speed, low reliability, and high implementation cost. One approach proposed to overcome these limitations is optical processing, which, thanks to its parallel-processing capability and the higher speed of light compared to electrical computation, not only raises processing speed toward the speed of light but also greatly reduces energy consumption and computational costs. These properties have led us to focus on an ultra-high-speed optical architecture that exploits optical parallel processing for comparing DNA sequences, and along the way we also examine the design limitations and challenges of the optical architecture.
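
A simple sketch of the underlying comparison operation (not the optical architecture itself): a dot-plot matrix marking the positions where two DNA strings match, which is exactly the kind of embarrassingly parallel computation an optical correlator could evaluate at once. The sequences are illustrative.

```python
import numpy as np

def dot_plot(seq_a: str, seq_b: str) -> np.ndarray:
    """All-pairs character comparison between two DNA strings."""
    a = np.frombuffer(seq_a.encode(), dtype=np.uint8)
    b = np.frombuffer(seq_b.encode(), dtype=np.uint8)
    return (a[:, None] == b[None, :]).astype(np.uint8)

M = dot_plot("ACGTAC", "ACGGAC")
print(M)                      # 1s along the diagonal mark matching regions
```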

Biography

Dr. Somayeh Koohi is an assistant professor in the Department of Computer Engineering at Sharif University of Technology. She received a double B.Sc. in electrical engineering (electronics) and computer engineering (software) from Sharif University of Technology in 2005. Continuing her graduate studies, she received her M.Sc. and Ph.D. in computer engineering (computer architecture) from Sharif University of Technology in 2007 and 2012, respectively. She has also spent a one-year research visit at UC Davis, CA. After completing her studies, she joined the Department of Computer Engineering at Sharif University of Technology as an assistant professor in 2012. Her research interests include the design, techniques, and tools for optical computing, processing, and data transmission, with a particular focus on large-scale data such as biological data. Optical networks-on-chip, optical data-center networks, and optical computing are among her main research areas.


Dr. Rasool Maghareh
Recording of the talk

Is My Program Completely Bug-free?

In this talk, I will review three approaches to systematic program testing and at least one practical tool from each approach. The aim of this review is to compare these approaches in their ability to find difficult bugs or prove their non-existence. In the first step, I will review Abstract Interpretation (AI), which is a path-insensitive and lightweight approach. Astree and Frama-C are two state-of-the-art tools that implement AI. AI-based tools are ideal for proving the non-existence of bugs. Secondly, I will review Dynamic Symbolic Execution (DSE), which has emerged as an important method to reason about programs, in both verification and testing. The main advantage of DSE over AI is that it is path-sensitive and hence able to find difficult bugs. I will review KLEE and TracerX as two state-of-the-art symbolic execution tools. In the third and final part of the talk, I will review Static Symbolic Execution, a.k.a. Bounded Model Checking (BMC), which is an alternative to DSE. In BMC, a program and a verification problem are transformed into a constraint-solving problem, and then the technology of SAT/SMT solvers is employed to prove or disprove the existence of the bug. I will review CBMC as a state-of-the-art BMC tool.
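
A toy sketch of the constraint-solving idea behind BMC, using the z3 Python bindings rather than CBMC: the small program `y = x + 10; if y > 100: assert y - x != 10` is encoded as SMT constraints and the solver searches for an input that violates the assertion. The example program is an assumption made for illustration.

```python
from z3 import Int, Solver, sat

x, y = Int("x"), Int("y")
s = Solver()
s.add(y == x + 10)          # program semantics
s.add(y > 100)              # path condition of the 'if' branch
s.add(y - x == 10)          # negation of the asserted property (y - x != 10)

if s.check() == sat:
    print("bug reachable, counterexample:", s.model())
else:
    print("assertion holds on all inputs (within this encoding)")
```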

Biography

I received my B.Sc. (Hons, 1st class) from Shiraz University in 2005 and my Ph.D. from the National University of Singapore in 2017. I was a Postdoctoral Research Fellow at the School of Computing, National University of Singapore, from 2017 to 2020. My research interests are in programming languages and precise program analysis. I have been utilizing program verification methods, especially dynamic symbolic execution, for precise resource analysis. The results of my research have been published at the RTAS, LCTES, ICSE, and FASE conferences. I was a research member working on the open-source TracerX symbolic execution engine. Recently, I joined the Huawei Heterogeneous Compiler Lab, Toronto, Canada, as a Compiler Software Engineer.


Dr. Mohammad Reza Mousavi

Doping Detection for Cyber-Physical Systems: A Data-Driven Approach

Doping refers to a piece of software or a system exhibiting behaviour that is in conflict with the user's intentions and requirements. Examples of doping include printer or mobile manufacturers locking in users and forcing them to use their original parts. Other examples are mobile malware mining crypto-currencies and the infamous diesel emission scandal. Doping detection is a timely subject that has recently been addressed by different researchers, and several techniques have been proposed for it. In this talk, we present an overview of different notions of conformance testing for cyber-physical systems and make a case for why it is a suitable tool for doping detection. We show how using conformance testing improves upon a state-of-the-art method for detecting causality. We empirically evaluate our proposed technique on actual data from NOx diesel emission tests and show that using conformance testing leads to better use and a more accurate interpretation of emission data, leading to better results in doping detection.
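
A small sketch of the flavour of conformance checking for sampled CPS outputs, under an assumed definition (not the authors' exact notion): every point of the observed trace must be within eps of some point of the reference trace at most tau time steps away. The traces below are made-up illustrative numbers.

```python
def conforms(reference, observed, tau: int, eps: float) -> bool:
    """Check a simple time/value-tolerance conformance relation between traces."""
    for i, y in enumerate(observed):
        window = reference[max(0, i - tau): i + tau + 1]
        if not any(abs(y - r) <= eps for r in window):
            return False
    return True

ref = [0.0, 0.1, 0.3, 0.6, 1.0, 1.4]     # e.g., emission level under a reference cycle
obs = [0.0, 0.12, 0.28, 0.61, 1.05, 1.38]
print(conforms(ref, obs, tau=1, eps=0.1))   # True
```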

Biography

Mohammad Reza Mousavi holds the chair of Data-Oriented Software Engineering at the School of Informatics, University of Leicester. Previously, he held positions in Sweden (Halmstad and Chalmers), the Netherlands (TU Eindhoven and TU Delft), and Iceland (Reykjavik). Mohammad obtained his Ph.D. from Eindhoven University of Technology (2005) and his Master's and Bachelor's degrees from Sharif University of Technology (2001 and 1999). He has led various research projects on testing at the foundational and applied levels, involving industrial sectors such as automotive and healthcare. His main research interest is in model-based testing, particularly of variability-intensive and cyber-physical systems.

More info: http://bit.ly/MohammadMousavi


Dr. M. Hassan Najafi
Recording of the talk

Exact In-Memory Multiplication Based on Deterministic Stochastic Computing

Memristors offer the ability to both store and process data in memory, eliminating the overhead of data transfer between the memory and the processing unit. For data-intensive applications, the development of efficient in-memory computing methods is under investigation. Stochastic computing (SC), a paradigm offering simple execution of complex operations, has been used for reliable and efficient in-memory multiplication of data. Current SC-based in-memory methods are incapable of producing accurate results. In this talk, we discuss the first accurate SC-based in-memory multiplier. For logical operations, we use Memristor-Aided Logic (MAGIC), and to generate bit-streams, we propose a novel method that takes advantage of the intrinsic properties of memristors. The proposed design improves the speed and reduces the memory usage and energy consumption compared to state-of-the-art (SoA) accurate in-memory fixed-point and off-memory SC multipliers.
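
A hedged sketch of exact (deterministic) stochastic-computing multiplication in the abstract's spirit: values a/N and b/N are encoded as unary bit-streams, one stream is repeated while the other is "clock-divided" so every bit pair meets exactly once, and a bitwise AND yields exactly (a*b)/N^2. This is a generic deterministic-SC encoding, not the memristive design from the talk.

```python
def unary(value: int, length: int):
    """Unary bit-stream encoding value/length as `value` ones followed by zeros."""
    return [1] * value + [0] * (length - value)

def deterministic_sc_multiply(a: int, b: int, n: int) -> float:
    sa = unary(a, n) * n                                  # repeat stream A, length n*n
    sb = [bit for bit in unary(b, n) for _ in range(n)]   # clock-divide stream B
    ones = sum(x & y for x, y in zip(sa, sb))             # AND gate + counter
    return ones / (n * n)

a, b, n = 3, 5, 8
assert deterministic_sc_multiply(a, b, n) == (a / n) * (b / n)   # exact, no error
print(deterministic_sc_multiply(a, b, n))   # 0.234375 == 15/64
```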

Biography

Dr. M. Hassan Najafi received the B.Sc. degree in Computer Engineering-Hardware from the University of Isfahan, Iran, the M.Sc. degree in Computer Architecture from the University of Tehran, Iran, and the Ph.D. degree in Electrical Engineering from the University of Minnesota, Twin Cities, USA, in 2011, 2014, and 2018, respectively. He is currently an Assistant Professor with the School of Computing and Informatics, University of Louisiana, LA, USA. His research interests include stochastic and unary computing, time-based processing, processing in memory, and computer architecture. Dr. Najafi was a recipient of the 2018 EDAA Outstanding Dissertation Award, the Best Poster Award at the 2019 DAC PhD Forum, and the Best Paper Award at the 35th IEEE International Conference on Computer Design (ICCD’17).




