3rd IEEE Seasonal School on

Digital Processing of Visual Signals and Applications

Anfiteatro do Centro de Eventos do Instituto de Informática - UFRGS


Porto Alegre, Brazil

May 07 to 09, 2019

ORGANIZATION

General Chairs
Prof. Sergio Bampi and Prof. Bruno Zatt
Image Processing Track
Prof. Manuel Menezes de Oliveira and Prof. Jacob Scharcanski
Video Processing Track
Prof. Luciano Agostini and Prof. Marcelo Porto
Applications Track
Prof. Edison Pignaton de Freitas and Prof. Altamiro Susin
Posters Chairs
Prof. Cláudio M. Diniz and Prof. Mateus Grellert
Multimedia Track
Prof. Valter Roesler
IEEE CASS Liaison
Prof. José R. Azambuja
IEEE SPS Liaison
Prof. Jacob Scharcanski
Local Arrangements Chair
Prof. José R. Azambuja and Prof. Eduardo Gastal
Webmaster and IT Support
Vitor G. Lima

About

The DPVSA School aims to offer an environment for fruitful exchange of ideas and discussion of the near-future challenges of this area, bringing together experts from both academia and industry. The program consists mostly of lectures by experienced professors and researchers. The target audience is the Visual Signal Processing and Applications community: graduate and undergraduate students as well as industry and research professionals.
The huge volume of videos and images generated at every moment, together with the availability of high-performance processing and communication resources, gives rise to a wealth of useful applications. These fields constitute an interdisciplinary domain whose parts will hopefully combine synergistically to strengthen everyone's work.


Image and Video Processing

Image acquisition and processing
Segmentation and feature extraction
Simultaneous Localization and Mapping; 3D object detection
Image and Video compression and coding
Algorithms and architectures for video processing

Applications

Medical imaging and Telemedicine
Advanced driver assistance systems
Robotics and Autonomous Guided Vehicles
Image Processing for Unmanned Aerial Vehicles


PROGRAM

07 May
Click on the corresponding topic for more details


  • Speaker: Mateus Grellert (UCPel)
  • Abstract: The increase of data that is constantly produced and consumed on the internet, combined with the recent discoveries in Deep Learning, has shown that Machine Learning is a flexible and efficient approach to handle computing problems that cannot be easily solved algorithmically. In this talk, the fundamental concepts of Machine Learning will be presented, focusing on supervised-learning techniques. Next, Neural Networks and Deep Learning will be briefly discussed, highlighting the milestones that contributed to the popularization of this approach in current research. Solutions from the literature that employ Machine Learning in Image and Video Processing will then be presented: Support Vector Machines to reduce HEVC encoding complexity; complexity reduction of transrating systems using Decision Trees and Random Forests; and Deep Neural Networks to improve image quality.
  • Bio: Prof. Mateus Grellert received the M.Sc. degree in Computer Science from the Federal University of Rio Grande do Sul (UFRGS), Brazil, in 2014, and the Ph.D. degree at the same University in 2018. He is an Assistant Professor at the Catholic University of Pelotas, Brazil. He is also a member of the Joint Group on Audio, Image, Multimedia and Hypermedia Coding (CE-021:000.029) from the Brazilian Association of Norms and Techniques (ABNT). He has been doing research in embedded systems solutions for more than 8 years. His published works include topics like reconfigurable architectures, memory-aware and energy-aware design, and efficient video-coding systems. His current research interests involve efficient video-coding algorithms using Machine Learning techniques and architectures for video-coding systems in constrained, embedded applications.
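The supervised-learning workflow the abstract refers to — fit a model on labeled examples, then predict labels for unseen inputs — can be sketched with a minimal decision stump (a one-split decision tree; all data below is made up for illustration):

```python
import numpy as np

def fit_stump(X, y):
    """Train a one-split decision stump: exhaustively pick the
    (feature, threshold, polarity) minimizing training errors."""
    best = None
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f]):
            pred = (X[:, f] > t).astype(int)
            err_plain = np.sum(pred != y)
            err_inv = np.sum((1 - pred) != y)
            err = min(err_plain, err_inv)
            if best is None or err < best[0]:
                best = (err, f, t, err_inv < err_plain)
    return best[1:]  # (feature, threshold, invert)

def predict_stump(model, X):
    f, t, invert = model
    pred = (X[:, f] > t).astype(int)
    return 1 - pred if invert else pred

# Toy labeled data: class 1 whenever the second feature exceeds ~0.5.
X = np.array([[0.1, 0.2], [0.4, 0.9], [0.8, 0.3], [0.2, 0.7]])
y = np.array([0, 1, 0, 1])
model = fit_stump(X, y)
print(predict_stump(model, np.array([[0.5, 0.8], [0.5, 0.1]])))  # → [1 0]
```

The same fit/predict split carries over directly to the SVMs, decision trees and random forests mentioned in the abstract; only the model family changes.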
  • Speaker: Fernando Osório (USP São Carlos)
  • Abstract: This talk presents Computer Vision and Machine Learning techniques adopted in the implementation and application of intelligent and autonomous vehicles. The LRM Lab (Mobile Robotics Research Laboratory / ICMC – USP) has completed 10 years of design, implementation and field testing of autonomous vehicles: the 1st driverless vehicle (CaRINA I), operational in 2010, was based on a Neural Network vision system. Other research platforms were CaRINA II, the 1st vehicle to navigate autonomously in urban spaces in Latin America, and the autonomous "Smart Truck" developed in partnership with SCANIA Latin America. We will discuss the evolution of these intelligent systems, starting with simple sensors such as 2D monocular cameras and moving up to 3D sensors such as Velodyne LIDAR, radar and stereo cameras. Our group's research is presently focused on using 3D Computer Vision systems (depth data) and Machine Learning/Deep Learning techniques to implement intelligent vehicular systems, including tasks such as the analysis of driver behavior (e.g. cellphone usage), obstacle detection, and detection/analysis of road signs and special signs (e.g. lane or road blocking). Our research group believes it is time to move from the 2D images used in most Deep Learning based systems to 3D/depth Deep Learning and 3D/depth Computer Vision systems. Examples of research developments and applications will be presented, and we will also discuss how other research groups can participate in this field: using virtual simulators (e.g. Carla and V-REP), or implementing mini autonomous cars with functionalities similar to those of the real ones.
  • Bio: Professor at the University of São Paulo (ICMC/USP São Carlos), Brazil, teaching at the undergraduate and graduate levels in Computer Science (Researcher/Professor and Course Coordinator of the Computer Engineering undergraduate course; Researcher/Professor at the Graduate Programs PPG MECAI and PPG CCMC - USP). Prof. Osório received his PhD in Computer Science ("Informatique: Systèmes et Communication") from the Institut National Polytechnique de Grenoble INPG/IMAG (France, 1998), and his MSc in Computer Science from UFRGS - PGCC (Brazil, 1991). Prof. Osório is a member of the ACM, IEEE and SBC (Brazilian Computer Society), was a board member of the Brazilian Robotics Council of the SBC (CE-R, 2008-2018) and Coordinator/President of the Brazilian Robotics Council (2014-2018). He was a board member of the Brazilian Artificial Intelligence Council of the SBC (CEIA, 2008-2010) and an IEEE Computer Society DVP-LA Member (Distinguished Visitor Program - Latin America, 2007-2009). Dr. Osório organized and served as PC General Chair of the IEEE LARS-SBR 2014 and LARS-SBR 2015 conferences (IEEE RAS Latin-American Robotics Symposium and Brazilian Robotics Symposium) and as a member of the steering committee of IEEE LARS-SBR 2016, 2017 and 2018; Chair of the ACM SAC Robot Track (Intelligent Robotic Systems, from 2009 to 2013); and Chair of the IEEE ICIT 2010 Session on Mobile Robotics. He coordinates the Computer Engineering Maker-Space (Espaço EngComp, 2016-present), shares the coordination of the LRM (Mobile Robotics Research Lab - ICMC/USP) with Prof. Denis Wolf, serves on the board of directors of the Center for Robotics (CRob-USP/SC), and is part of the representative board of the IEEE RAS South Brazil Chapter. Research activities: he was a member and research group manager of the INCT-SEC (National Institute of Science and Technology for Embedded Critical Systems, 2009-2014). Dr. Osório was one of the leading researchers of the CaRINA Autonomous Vehicles Project at LRM/USP, designing, implementing and testing driverless vehicles for the last 10 years, including the Autonomous Truck Project with partners from LRM-ICMC, CRob/USP and Scania Latin America. He recently published the book "Robótica Móvel", has more than 200 papers published in conferences and journals, organized 10 books and published 15 book chapters. His main research interests are: Mobile Robotics, Machine Learning, Intelligent Systems and Vehicles, Computer Vision and Virtual Simulation.
  • Speaker: Gustavo Ilha (UFRGS)
  • Abstract: MPVue is a heterogeneous distributed-memory MPSoC platform that targets the computer vision field. The system is built to reconcile the high performance required by computer vision algorithms with the low power consumption desired in embedded systems. To achieve this, a 2-D mesh Network-on-Chip was built connecting several RISC-V based cores. The software architecture was developed using the Service-Oriented Programming model to enable parallel execution of computer vision algorithms. In this presentation, details of the system implementation are explored and some results are presented.
  • Bio: Gustavo Ilha received the Electrical Engineer degree from UFRGS in 2006 and M. Sc. degree from UFRGS in 2010, where he is currently pursuing the Ph.D. degree. In 2013, he joined CEITEC S.A as an Application Engineer. His main areas of research interest are microelectronics and computer vision.
  • Speaker: Vladimir Afonso (IFSul, UFPel, and UFRGS)
  • Abstract: The popularization of multimedia services has pushed forward the development of 2D/3D video-capable embedded mobile devices. In this scenario, 3D-video systems based on the exhibition of multiple views at the same time are also expected, including systems capable of dealing with high and ultra-high resolutions. To meet this demand and the huge amount of data to be processed and stored, an extension of the HEVC (High Efficiency Video Coding) standard targeting three-dimensional video coding was developed by ISO/IEC and ITU-T experts. This 3D extension is called 3D-HEVC and it uses the MVD (Multi-view plus Depth) concept, which associates a depth map with each texture frame for each view that composes the video sequence. Accordingly, 3D-HEVC defines several novel coding tools to make 3D-video processing with multiple views at increasing resolutions possible under this new perspective. As a result, the 3D-HEVC extension requires a high computational effort. Since 2D/3D video-capable embedded mobile devices require efficient energy/memory-management strategies to deal with severe memory/processing requirements and a limited energy supply, the development of energy- and memory-aware systems targeting 3D-HEVC is essential. This talk presents a brief overview of the 3D-HEVC extension and the challenges related to 3D-video coding under an MVD approach. In addition, some dedicated hardware solutions proposed for high-throughput and energy-efficient 3D-HEVC applications will be presented and discussed.
  • Bio: Vladimir Afonso received the B.S. degree in industrial automation from the Sul-rio-grandense Federal Institute (IFSul), Pelotas, RS, Brazil, in 2008 and the M.S. degree in computer science from the Federal University of Pelotas (UFPel), RS, Brazil, in 2013. He is a Ph.D. candidate in microelectronics at the Federal University of Rio Grande do Sul (UFRGS), Porto Alegre, RS, Brazil. He has been a Professor at IFSul since 2009 and is a researcher with the Video Technology Research Group (ViTech) at UFPel. His research interests include 2D/3D video-coding algorithms, and FPGA-based and VLSI designs for video coding.
  • Speaker: Maik Basso (UFRGS)
  • Abstract: In precision agriculture, activities such as selective spraying of agrochemicals are essential to maintain high productivity and quality of agricultural products. The use of unmanned aerial vehicles (UAVs) to perform those activities reduces soil compaction, compared to the use of heavy machinery, and helps reduce the waste of agrochemicals through a punctual and self-regulating application. This presentation demonstrates a complete guidance system for UAVs - software and hardware - based on image processing techniques. The guidance algorithm is the main component of the software flow, and it is subdivided into two parts. First, there is the Crop Row Detection algorithm, which is responsible for the correct identification of the crop rows. The second algorithm is the Line Filter, responsible for generating the guidance parameters, which are sent to the flight controller. Based on field experiments, the algorithm achieved a detection rate of 100% of the crop rows at image resolutions of 320 × 240 and above. The system performance was evaluated in laboratory experiments and reached 31.22 FPS for images at the lowest resolution (320 × 240) and 1.63 FPS at the highest resolution (1920 × 1080). The main contributions of this project are the design and development of an embedded guidance system. Other contributions are: the proposal of a filter for image pretreatment; a filter to remove false-positive lines; and an algorithm for generating the guidance parameters based on the detected crop rows.
  • Bio: MSc. Maik Basso is a Ph.D. student in the Graduate Program in Electrical Engineering (PPGEE) at the Federal University of Rio Grande do Sul (UFRGS), Brazil. He has an MSc. degree in Electrical Engineering, with emphasis on Systems of Automation (UFRGS), 2018. He is a bachelor in Information Systems from the Federal University of Santa Maria (UFSM), 2015. His current research interests include the following topics: embedded systems for autonomous (multi) unmanned aerial vehicles (UAVs); image processing; artificial intelligence; WEB technologies and WEB development.
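The actual Crop Row Detection and Line Filter algorithms are not reproduced here, but the underlying idea — segment vegetation, find the crop-row lines, and turn their positions into guidance parameters — can be sketched with a much-simplified column-projection detector on a synthetic mask (thresholds and signal shapes are invented for illustration):

```python
import numpy as np

def detect_crop_rows(mask, min_strength=0.5):
    """Crude crop-row detector: project the vegetation mask onto
    image columns and keep local maxima of the column density."""
    density = mask.mean(axis=0)
    peak = density.max()
    return [c for c in range(1, len(density) - 1)
            if density[c] >= min_strength * peak
            and density[c] >= density[c - 1]
            and density[c] >= density[c + 1]]

def cluster_columns(cols):
    """Merge runs of adjacent columns into single row centers."""
    groups, run = [], [cols[0]]
    for c in cols[1:]:
        if c - run[-1] <= 1:
            run.append(c)
        else:
            groups.append(run)
            run = [c]
    groups.append(run)
    return [sum(r) / len(r) for r in groups]

def guidance_offset(rows, width):
    """Lateral offset (pixels) from the image center to the
    nearest detected row -- a stand-in guidance parameter."""
    return min((r - width / 2.0 for r in rows), key=abs)

# Synthetic 240x320 vegetation mask with rows at x = 80, 160, 240.
mask = np.zeros((240, 320))
for x in (80, 160, 240):
    mask[:, x - 2:x + 3] = 1.0
rows = cluster_columns(detect_crop_rows(mask))
offset = guidance_offset(rows, 320)
```

A real system would work on oblique camera views and fit line parameters (not just vertical columns), then filter them over time before sending commands to the flight controller.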

08 May
Click on the corresponding topic for more details


  • Speaker: Marcelo Porto (UFPel)
  • Abstract: This talk presents dedicated hardware design for video compression in mobile devices. Dedicated hardware design is crucial for making the manipulation of high-resolution videos in mobile devices possible, mainly due to the need for high performance in real-time processing and the energy-consumption constraints. The talk will discuss some basic concepts of video compression, considering the common coding flow used by all current video coding standards, such as H.264, HEVC and AV1. The focus of this talk will be the main coding tools of those standards, such as Inter- and Intra-Frame Prediction and the residual coding loop, composed of the forward and inverse Transform and Quantization steps. Algorithm simplifications and optimizations focused on efficient hardware design targeting performance improvement and/or chip-area and power-consumption reduction will be presented and discussed. This talk will also present the developed hardware architectures that exploit the algorithm optimizations, as well as the main results of ASIC synthesis. Moreover, some aspects related to memory access will also be discussed, such as memory bandwidth and energy-consumption reduction. In the end, some aspects of the new video coding standards, such as AV1 and VVC, will be presented, as well as their main challenges for dedicated hardware design.
  • Bio: Marcelo Porto received the B.S. degree in computer science from the Federal University of Pelotas (UFPel), RS, Brazil, in 2006 and the M.S. and Ph.D. degrees, also in computer science, from the Federal University of Rio Grande do Sul (UFRGS), Porto Alegre, RS, Brazil, in 2008 and 2012, respectively. He has been a Professor at UFPel since 2009, is a permanent member of the Graduate Program in Computer Science (PPGC), and is a researcher of the Video Technology Research Group (ViTech) and the Group of Architectures and Integrated Circuits (GACI). His research interests include video coding, motion estimation algorithms, FPGA-based design and VLSI design for video coding.
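The residual coding loop mentioned in the abstract — forward transform, quantization, and their inverses — can be illustrated with the well-known 4×4 integer transform of H.264; note the quantizer below is a simplified uniform divisor, not the standard's exact per-position scaling tables:

```python
import numpy as np

# H.264's 4x4 forward integer transform matrix.
C = np.array([[1,  1,  1,  1],
              [2,  1, -1, -2],
              [1, -1, -1,  1],
              [1, -2,  2, -1]])

def forward_transform(block):
    """2-D separable forward transform: C @ block @ C^T."""
    return C @ block @ C.T

def quantize(coeffs, qstep):
    # Simplified uniform quantizer -- illustrative only.
    return np.round(coeffs / qstep).astype(int)

def dequantize(levels, qstep):
    return levels * qstep

# Inverse via the exact matrix inverse; the real standard folds the
# normalization into its quantization tables to stay integer-only.
Cinv = np.linalg.inv(C.astype(float))

def inverse_transform(coeffs):
    return Cinv @ coeffs @ Cinv.T

blk = np.array([[5, 4, 3, 2],
                [4, 4, 3, 2],
                [3, 3, 2, 1],
                [2, 2, 1, 1]])
recon = inverse_transform(dequantize(quantize(forward_transform(blk), 4), 4))
# Round trip is lossless without quantization; with qstep=4 the
# per-pixel reconstruction error stays small.
```

The quantization step (`qstep`) is the lossy knob: larger steps shrink the bitstream at the cost of larger residual-reconstruction error, which is exactly the rate/quality trade-off the dedicated hardware must navigate.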
  • Speaker: Ricardo Queiroz (UnB)
  • Abstract: We will discuss how efforts to find a new dimension in communications led to telepresence systems and eventually to holoportation, the representation of a 3D solid body at a remote location. An effort will be made to characterize this evolution and to discuss the state of the art in the compression of point clouds, the technique currently used to transmit 3D content in real-time communications.
  • Bio: Dr. Ricardo L. de Queiroz received the Engineer degree from Universidade de Brasilia, Brazil, in 1987, the M.Sc. degree from Universidade Estadual de Campinas, Brazil, in 1990, and the Ph.D. degree from The University of Texas at Arlington, in 1994, all in Electrical Engineering. In 1990-1991, he was with the DSP research group at Universidade de Brasilia as a research associate. He joined Xerox Corp. in 1994, where he was a member of the research staff until 2002. In 2000-2001 he was also an Adjunct Faculty at the Rochester Institute of Technology. He joined the Electrical Engineering Department at Universidade de Brasilia in 2003. In 2010, he became a Full (Titular) Professor at the Computer Science Department at Universidade de Brasilia. During 2015 he was a Visiting Professor at the University of Washington, in Seattle. Dr. de Queiroz has published extensively in journals and conferences and has contributed chapters to books as well. He also holds 46 issued patents. He is a past elected member of the IEEE Signal Processing Society's Multimedia Signal Processing (MMSP) and Image, Video and Multidimensional Signal Processing (IVMSP) Technical Committees. He is an editor for IEEE Transactions on Image Processing and a past editor for the EURASIP Journal on Image and Video Processing, IEEE Signal Processing Letters, and IEEE Transactions on Circuits and Systems for Video Technology. He was appointed an IEEE Signal Processing Society Distinguished Lecturer for the 2011-2012 term. Dr. de Queiroz has been actively involved with IEEE Signal Processing Society chapters in Brazil and in the US. He was the General Chair of ISCAS'2011, MMSP'2009, and SBrT'2012, and was also part of the organizing committee of many SPS flagship conferences. His research interests include image and video compression, point cloud compression, multirate signal processing, and color imaging. Dr. de Queiroz is a Fellow of the IEEE and a member of the Brazilian Telecommunications Society.
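Point-cloud codecs of the kind discussed above typically organize geometry in an octree and code node occupancy; a toy version of that occupancy coding, without the entropy-coding stage a real codec would add, looks like this:

```python
def encode_octree(points, origin, size, depth):
    """Emit one occupancy byte per octree node: bit i is set when
    child octant i contains at least one point (depth-first order).
    Toy geometry coder -- a real codec would entropy-code the bytes."""
    if depth == 0:
        return []
    half = size / 2.0
    children = [[] for _ in range(8)]
    for x, y, z in points:
        idx = ((x >= origin[0] + half)
               + ((y >= origin[1] + half) << 1)
               + ((z >= origin[2] + half) << 2))
        children[idx].append((x, y, z))
    stream = [sum(1 << i for i, c in enumerate(children) if c)]
    for i, c in enumerate(children):
        if c:
            child_origin = (origin[0] + half * (i & 1),
                            origin[1] + half * ((i >> 1) & 1),
                            origin[2] + half * ((i >> 2) & 1))
            stream += encode_octree(c, child_origin, half, depth - 1)
    return stream

# Two opposite corner points of the unit cube, coded to depth 2.
stream = encode_octree([(0.1, 0.1, 0.1), (0.9, 0.9, 0.9)],
                       (0.0, 0.0, 0.0), 1.0, 2)
print(stream)  # → [129, 1, 128]
```

Because sparse regions cost nothing below unoccupied nodes, the stream grows with occupied surface area rather than with the cubic volume of the scene — the property that makes octrees attractive for real-time 3D transmission.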
  • Speaker: Manuel Oliveira (UFRGS)
  • Abstract: Deconvolution is a fundamental tool for many imaging applications, ranging from microscopy to astronomy. In this talk, I will present efficient deconvolution techniques tailored for two important computational photography applications: estimating color and depth from a single photograph, and motion deblurring from camera shake. For the first, I will describe a coded-aperture method based on a family of masks obtained as the convolution of one "hole" with an arrangement of Dirac delta functions. We call this arrangement the structural component of the mask, and use it to efficiently encode scene distance information. I will then show how one can design well-conditioned masks for which deconvolution can be efficiently performed by inverse filtering. I will demonstrate the effectiveness of this approach by constructing a mask for distance coding and using it to recover color and depth information from single photographs. This leads to significant speedup, extended range, and higher depth resolution compared to previous approaches. For the second application, I will present an efficient technique for high-quality non-blind deconvolution based on the use of sparse adaptive priors. Despite its ill-posed nature, I will show how to model the non-blind deconvolution problem as a linear system, which is solved in the frequency domain. This clean formulation leads to a simple and efficient implementation, which is faster and whose results tend to have a higher peak signal-to-noise ratio than previous methods.
  • Bio: Manuel M. Oliveira is an Associate Professor of Computer Science at the Federal University of Rio Grande do Sul (UFRGS), in Brazil. He received his PhD from the University of North Carolina at Chapel Hill, in 2000. Before joining UFRGS in 2002, he was an Assistant Professor of Computer Science at the State University of New York at Stony Brook (2000 to 2002). In the 2009-2010 academic year, he was a Visiting Associate Professor at the MIT Media Lab. His research interests cover most aspects of computer graphics, but especially the frontiers among graphics, image processing, and vision (both human and machine). In these areas, he has contributed a variety of techniques including relief texture mapping, real-time filtering in high-dimensional spaces, efficient algorithms for Hough transform, new physiologically-based models for color perception and pupil-light reflex, and novel interactive techniques for measuring visual acuity. His work has been marked by a quest for solutions that produce high-quality results in real time. Manuel was program co-chair of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games 2010 (I3D 2010), and general co-chair of ACM I3D 2009. He is program co-chair of the Latin American Symposium on Computer Graphics, Virtual Reality and Image Processing (CLEI 2014), and a member of the CIE (International Commission on Illumination) Technical Committee TC1-89 “Enhancement of Images for Colour Defective Observers". He has also served as program co-chair of the WSCG 2013 and SIBGRAPI 2006. He received the ACM Recognition of Service Award in 2009 and 2010.
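As background for the deconvolution theme (this is the basic regularized inverse filtering the talk builds on, not the speaker's sparse-adaptive-prior method): blur is a convolution, so in the frequency domain deconvolution becomes a damped division of spectra.

```python
import numpy as np

def blur(img, psf):
    """Blur model: circular convolution computed via FFT."""
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf)))

def deconvolve(blurred, psf, eps=1e-3):
    """Regularized inverse filtering (Wiener-like): divide the
    spectra, damping frequencies where the PSF has little energy."""
    H = np.fft.fft2(psf)
    G = np.fft.fft2(blurred)
    F = G * np.conj(H) / (np.abs(H) ** 2 + eps)
    return np.real(np.fft.ifft2(F))

# Noise-free demo: a 3x3 box blur is inverted almost exactly, so a
# tiny eps suffices; with observation noise, a larger eps trades
# restoration sharpness for noise robustness.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
psf = np.zeros((64, 64))
psf[:3, :3] = 1.0 / 9.0
restored = deconvolve(blur(img, psf), psf, eps=1e-10)
```

The "well-conditioned masks" mentioned in the abstract are precisely apertures whose spectrum `H` avoids near-zero magnitudes, so this plain division stays stable without heavy regularization.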
  • Speaker: Eliezer Bernart (UFRGS)
  • Abstract: In this presentation, some challenges in the field of medical image processing and interpretation are discussed, such as the identification of cancer, tumors and other diseases that often appear in medical images. We will discuss and present new trends in this field, such as medical image processing via deep learning. Some open problems in the field will also be discussed, such as dealing with small datasets and the quest for explainable interpretation results.
  • Bio: Eliezer Bernart is a Substitute Professor at the Federal University of Health Sciences of Porto Alegre (UFCSPA), and Ph.D. Student at Federal University of Rio Grande do Sul (Institute of Informatics). He holds an MSc in Computer Science and a B.Eng (Hons) in Computer Engineering. His main research interests are in image processing, computer vision, and pattern recognition. Currently, his research focus is in medical image processing and the development of new methods for classification exploring sparse representation algorithms.
  • Speaker: Daniel Correa (Ponfac)
  • Abstract: The field of computer vision is currently growing in Brazil, and the country is achieving worldwide prominence with the development of innovative industrial projects that have been deeply improving the automation of production processes and quality control. This presentation will show some cases that illustrate how computer vision technology is being applied to the industrial segment in Brazil.
  • Bio: MSc. Daniel Correa is one of Ponfac's engineers responsible for research and innovation applied to the development of industrial solutions. He has an MSc. degree in Electrical Engineering, with emphasis in Control Systems (UFRGS, 2015) and a bachelor in Electrical Engineering (UFRGS, 2010). His current research interests include: Embedded machine vision systems; Machine Learning; Smart Cities; Internet of Things.

09 May
Click on the corresponding topic for more details


  • Speaker: Márcio Costa (UFSC)
  • Abstract: Partial or total hearing limitation is a problem that affects 10 million Brazilian citizens, as well as about 5% of the whole human population. Hearing impairment may lead to social isolation, professional limitations and health risks, decreasing quality of life and economic productivity. Despite some important initiatives, there is still a lot to be done to establish the basis of our national industry in this area. This talk presents an overview of signal processing methods for assistive hearing. Two important hearing-compensation devices are approached here: hearing aids and cochlear implants. We will start by describing the anatomic and physiologic characteristics of the human ear from an engineering point of view, and then we will concentrate on requirements and specifications for hearing compensation. Next, we will introduce some of the subsystems that compose each application, such as beamforming, filterbanks, adaptive filters, active noise control, optimum filtering, and time-frequency filtering. The main aim of this talk is to demystify both hearing-aid and cochlear-implant architectures and processing methods, showing how well-known and advanced signal processing techniques are combined to constitute practical commercial systems. We conclude by showing that assistive hearing is a very attractive area for signal processing and microelectronics engineers, providing opportunities for research, technical innovation and economic development.
  • Bio: Márcio H. Costa received the B.E.E. degree from Universidade Federal do Rio Grande do Sul (UFRGS), Porto Alegre, Brazil, in 1991; the M.Sc. degree in biomedical engineering from Universidade Federal do Rio de Janeiro (UFRJ), Rio de Janeiro, Brazil, in 1994; and the Dr. degree in electrical engineering from Universidade Federal de Santa Catarina (UFSC), Florianópolis, Brazil, in 2001. From 1994 to 2004 he was with the Department of Electrical Engineering, Biomedical Engineering Group, Universidade Católica de Pelotas. Since 2004 he has been with the Department of Electrical and Electronic Engineering at Universidade Federal de Santa Catarina. In 2013, he was a Visiting Researcher with the Communications and Signal Processing Research Group, Imperial College London. His present research interests are in biomedical signal processing, hearing aids, cochlear implants, linear and nonlinear adaptive filters, adaptive inverse control, and active noise and vibration control.
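Among the subsystems listed in the abstract, adaptive filters are central to hearing-aid noise reduction; a minimal LMS noise canceller, with invented demo signals and an assumed reference-noise input, can be sketched as:

```python
import numpy as np

def lms_cancel(primary, reference, order=8, mu=0.02):
    """LMS adaptive noise canceller: adapt a filter that maps the
    reference noise onto the noise buried in the primary input and
    subtract the estimate; the error signal is the cleaned output."""
    w = np.zeros(order)
    out = np.zeros(len(primary))
    for n in range(order, len(primary)):
        x = reference[n - order:n][::-1]   # newest reference sample first
        e = primary[n] - w @ x             # cancel the estimated noise
        w += mu * e * x                    # LMS weight update
        out[n] = e
    return out

# Made-up demo signals: a tone ("speech") corrupted by filtered
# noise, with the raw noise available as a reference input.
rng = np.random.default_rng(1)
t = np.arange(4000)
speech = 0.5 * np.sin(2 * np.pi * 0.01 * t)
noise = rng.standard_normal(4000)
corrupting = 0.9 * np.roll(noise, 1) + 0.4 * np.roll(noise, 2)
cleaned = lms_cancel(speech + corrupting, noise)
```

In a real hearing aid the "reference" comes from additional microphones or beamformer outputs, and the step size `mu` is chosen to balance convergence speed against residual misadjustment.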
  • Speaker: Edson Mintsu (UnB)
  • Abstract: This lecture presents a set of signal processing solutions resulting from cooperation between industry and Universidade de Brasília (UnB). Some research and development problems will be presented along with the solutions developed at UnB in cooperation with Hewlett-Packard, Eletronorte, Petrobrás, Toledo do Brasil, Google, Samsung, and others. Several technologies in different topics were developed, such as image and video compression, energy quality, acoustics, signal processing, and voice compression.
  • Bio: Edson Mintsu Hung is an assistant professor of Electrical Engineering at Universidade de Brasilia and a former head of the Electronic Engineering Faculty (2014-2015). Mintsu received the Eng., M.Sc., and D.Sc. degrees from the Dept. of Electrical Engineering at Universidade de Brasilia, Brazil, in 2004, 2007 and 2012, respectively. He received scholarships from the Research and Development Center for the Security of Communication (CEPESC) of the Brazilian Intelligence Agency (ABIN) (2004-2007) and Hewlett-Packard (2005-2011). He also contributed to the development of a set of transcoders for the Brazilian Digital TV Broadcast System (SBTVD) (2005-2006); software to analyze electrical-energy-quality parameters for Eletronorte's systems (2006-2008); vocoder (iLBC, ACELP, CELP, G.723, and G.729) implementations for IP phones (2007-2008); tools for instrumentation and processing of weighing systems with Toledo do Brasil (2014-now); tools for sonar remote sensing for Petrobrás (2016-2017); a motion compensation tool for Google (2016); and an image and video codec for Samsung (2019-). In 2006, Mintsu developed embryonic work on wedge partition, which was implemented in AV1, Google's open codec. Three of his papers were awarded Top 10% at the IEEE International Conference on Image Processing in 2014 and 2015, and a journal paper was recommended in the R-Letters of the IEEE Communications Society. In 2016 he received a Google Research Award and a certificate of dedication and leadership for supporting the activities of the Centro-Norte Brasil Chapter of the IEEE Signal Processing Society. Mintsu served as a reviewer for Eusipco (European Signal Processing Conference) in 2011 and 2012 and for the IEEE International Conference on Image Processing (ICIP) from 2012 to 2019. He was also a Technical Session Chair at Eusipco 2011 and ICIP 2015. He is also a regular reviewer for ICIP and for journals such as IEEE Transactions on Image Processing, IEEE Transactions on Circuits and Systems for Video Technology, Elsevier Digital Signal Processing and Elsevier Signal Processing: Image Communication.
  • Speaker: Cesar Pozzer (UFSM)
  • Abstract: This lecture presents the development of a new file format optimized for large images, for use in a virtual simulator for the Brazilian army. We discuss the existing solutions and show the advantages of creating a specific file format to improve performance in real-time applications. Our solution can handle raster images, height maps, normal maps, and splat maps.
  • Bio:
  • Speaker: Marco Mattavelli - Webconference (EPFL, Switzerland)
  • Abstract: In its more than 30 years of activity, ISO/IEC JTC 1/SC 29/WG 11, better known as the Moving Picture Experts Group (MPEG), has developed several generations of standards that first transformed the world of media from analog to digital, and then made possible the variety of interoperable applications exchanging and reproducing audiovisual content that we commonly use every day. More recently, MPEG has also addressed non-media data, such as genome sequencing data, in its standardization efforts. In fact, the development and rapid progress of high-throughput sequencing (HTS) technologies has made it possible to reduce the cost of sequencing an entire human genome to the range of US $1,000, with the expectation that within the next few years this cost could drop to about US $100. The lack of appropriate representations and efficient compression technologies is a situation similar to the one witnessed at the passage from analog to digital media, and constitutes a critical element limiting the potential of genomic data usage for scientific and public health purposes. Starting from these considerations, MPEG and ISO TC 276/WG 5 have produced a new open standard to compress, store, transmit and process sequencing data: ISO/IEC 23092 (MPEG-G). The standard not only offers high levels of compression compared with currently used formats, but also provides, on the very same model as other MPEG digital media standards, completely new functionalities, such as support for selective access in the compressed domain, support for data protection mechanisms, and flexible storage and streaming capabilities. After introducing the main principles of the MPEG work methodology, the talk presents the essential technologies, inspired by the signal processing of digital media, that have been selected for the three parts constituting the MPEG-G standard: 1) file and transport format, 2) compression technologies and 3) metadata and APIs. It will also be shown how the MPEG media technologies employed by MPEG-G support new use cases not covered by legacy genomic formats, including: read classification, selective access and selective data streaming in the compressed domain, compressed file concatenation, genomic study aggregation, standard support and enforcement of privacy rules, selective encryption of sequencing data and metadata, annotation and linkage of genomic segments, and incremental update of sequencing data and metadata. Finally, the talk will discuss some open problems and other standardization challenges in the processing of genomic data.
  • Bio: Marco Mattavelli started his research activity at the Philips Research Laboratories in Eindhoven in 1988, working on channel and source coding for optical recording, electronic photography and signal processing of HDTV. In 1991 he joined the Swiss Federal Institute of Technology (EPFL), where he received his Ph.D. in 1996. He has been chairman of a group of the MPEG ISO/IEC standardization committee. For his work, he received the ISO/IEC Award in 1997 and 2003. He currently leads the Multimedia Architectures Research Group at EPFL. His current major research activities include methodologies for the specification and modeling of complex systems, architectures for video coding, high-speed image acquisition and video processing systems, and applications of combinatorial optimization to signal processing. He is the author of more than 150 publications, has served as invited editor for several conferences and scientific journals, and is currently an associate editor of IEEE Signal Processing Letters and IEEE Transactions on Circuits and Systems for Video Technology.
  • Speaker: Ramon Costi Fernandes (PUC-RS)
  • Abstract: The High-Efficiency Video Coding (HEVC) standard culminates years of advancements in video coding technologies. Compared to its predecessor H.264, HEVC achieves up to 50% better coding efficiency, i.e., half the encoded video size at the same visual quality. Among the many improvements of HEVC, its intra-frame predictor was greatly enhanced with additional prediction modes, capable of representing more block contents than its predecessors. Improving intra-frame prediction is an important aspect of the encoding flow, as a better prediction yields reduced residual energy and, consequently, better coding efficiency. This work employs Curved Angular prediction modes to enhance the HEVC intra-frame predictor, offering a better approximation of encoded block contents. All 33 angular modes in HEVC receive an offset-based displacement calculation for each predicted sample, so that the resulting prediction block models image regions with curved textures. The Curved Angular prediction modes show promising results, with a negligible overhead in the syntax elements of the encoded bitstream (up to 2.96%) to transmit the curvature parameter along with the angular mode of a predicted block. Results demonstrate increased prediction accuracy with lower residual energy, achieving an average reduction in Bjøntegaard-Delta bitrate (BD-rate) of 3.43% for the HEVC test sequences.
  • Bio: Ramon Costi Fernandes is a Ph.D. student at the Pontifical Catholic University of Rio Grande do Sul (PUCRS, BR), holding an M.Sc. in Computer Science (PUCRS, BR, 2016) and a B.Sc. in Computer Science (PUCRS, BR, 2014). His current research interests include video coding, focusing on the H.26x family of coding standards. Further research interests include MPSoC design with an emphasis on NoC communication protocols; he has worked on projects at PUCRS's GSE and GAPH research groups on NoC design and wireless sensor networks.
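The idea of bending an angular mode with a per-sample offset can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the quadratic growth of the curvature term with the row index, and the simple two-tap interpolation, are assumptions made for the sketch (HEVC itself uses a 1/32-sample fixed-point interpolation).

```python
def predict_block(ref_top, size, angle, curvature=0.0):
    """Toy vertical-class angular prediction from a top reference row.
    angle: horizontal displacement per row, in samples (a float here,
    rather than HEVC's integer intraPredAngle; assumed non-negative).
    curvature: hypothetical extra offset growing with the row index,
    which bends the otherwise straight prediction direction."""
    block = []
    for y in range(size):
        # straight angular displacement plus a curvature term
        shift = (y + 1) * angle + curvature * (y + 1) ** 2
        row = []
        for x in range(size):
            pos = x + shift
            i = int(pos)                      # integer reference index
            frac = pos - i                    # fractional weight
            i = max(0, min(i, len(ref_top) - 2))
            # two-tap linear interpolation between neighbouring references
            row.append((1 - frac) * ref_top[i] + frac * ref_top[i + 1])
        block.append(row)
    return block
```

With `curvature=0` this degenerates to an ordinary straight angular mode; a nonzero curvature shifts the lower rows progressively further, tracing a curved texture.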
  • Speaker: Valter Roesler (UFRGS)
  • Abstract: Recent years have seen a great evolution in videoconferencing. Many face-to-face meetings have been replaced by remote meetings in which participants use different types of systems for interaction, ranging from high-quality dedicated rooms (telepresence systems), through simpler rooms (with one endpoint and two TVs, for example), to web conferencing systems (which use the web browser for communication), among others. A problem that arises is the need for people to interact across different systems, such as a hardware endpoint talking to a web conferencing client or a VoIP phone. The purpose of this talk is to present standards and architectures that allow the integration of these technologies transparently to the user. Concepts such as the MCU (Multipoint Control Unit), SFU (Selective Forwarding Unit), SIP (Session Initiation Protocol) signaling, H.323, scalability, and user experience will be addressed. Examples of use will be given in the areas of education and telemedicine. The focus will be on PRAV's projects (Projects in Audio and Video).
  • Bio:
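The MCU/SFU trade-off mentioned in the abstract can be summed up by counting the streams each participant exchanges. The sketch below is a back-of-the-envelope model (real systems add simulcast, scalable coding, and audio mixing, which change the numbers):

```python
def streams_per_client(n, topology):
    """Streams each participant sends/receives in an n-party call.
    mcu:  each client exchanges a single stream with the central mixer;
    sfu:  each client sends one stream and receives one per other party;
    mesh: every client exchanges a stream with every other client."""
    if topology == "mcu":
        return {"up": 1, "down": 1}
    if topology == "sfu":
        return {"up": 1, "down": n - 1}
    if topology == "mesh":
        return {"up": n - 1, "down": n - 1}
    raise ValueError("unknown topology: " + topology)
```

The model makes the design tension visible: an SFU keeps the server cheap (it forwards without transcoding) at the cost of client downlink bandwidth, while an MCU does the opposite.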
  • Speaker: Guilherme Longoni
  • Abstract: In multipoint video collaboration via an MCU (Multipoint Control Unit), the MCU must perform many encodings and decodings simultaneously, which is computationally expensive. Traditionally, this was solved with dedicated hardware exploiting massive parallelism in DSPs (Digital Signal Processors). In recent years, with the evolution of cloud architectures and the processing capacity of commodity computers, the possibility of replacing dedicated hardware with cloud systems has emerged, as it is more economical and more flexible to evolve. The purpose of this talk is to present a possible cloud software architecture that enables the scalability of videoconferencing systems through the use of dynamic orchestrators and virtual machines.
  • Bio:
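As a minimal sketch of the kind of decision such an orchestrator makes: given the current transcoding load, decide how many VM instances to keep running. The sizing rule and the headroom parameter below are hypothetical, chosen only to illustrate the scale-up logic.

```python
import math

def vms_needed(active_ports, ports_per_vm, headroom=0.2):
    """Hypothetical autoscaling rule: provision enough MCU VMs to cover
    the current transcoding load plus a headroom fraction, so a burst of
    new participants does not immediately force another scale-up.
    active_ports: participants currently being transcoded;
    ports_per_vm: transcoding capacity of one VM instance."""
    target = active_ports * (1 + headroom)
    return max(1, math.ceil(target / ports_per_vm))
```

A production orchestrator would add hysteresis (separate scale-up and scale-down thresholds) to avoid oscillating as participants join and leave.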


CALL FOR POSTERS

INSTRUCTIONS

Poster sessions will be held at the 2019 School on Digital Processing of Visual Signals and Applications. Companies and graduate and undergraduate students are invited to showcase their research on Image and Video Processing and Applications, including (but not limited to) the topics covered by the 3rd DPVSA School.


SUBMISSION FORMAT AND DEADLINES

Authors must submit an A4-sized (WxH = 210 mm x 297 mm) copy of the proposed poster for the review process through the EasyChair system, along with a 200-word abstract:

Link for EasyChair

Link for suggested poster template

Submissions are due May 2, 2019, 11:59 PM Brazilian Time (GMT-3) (extended from April 30, 2019).


POSTER PRESENTATION GUIDELINES

For each presentation, one poster board in portrait orientation will be reserved. Posters will be printed in A1 size (WxH = 594 mm x 841 mm) by the School Organizers.

LOCATION

Universidade Federal do Rio Grande do Sul - Instituto de Informática
Anfiteatro do Centro de Eventos, Prédio 43413 (antigo 67)
Campus do Vale, Porto Alegre - Rio Grande do Sul, Brazil.

REGISTRATION

(FREE REGISTRATION, PARTICIPATION, and POSTER PRINTING)

The 3rd DPVSA School is supported and sponsored by the IEEE Signal Processing Society and FAPERGS. The school is also supported by the local chapter of the IEEE Circuits and Systems Society.

Link to Google Forms registration