Tutorial proposals are invited for the SIBGRAPI 2010 – 23rd Conference on Graphics, Patterns, and Images to be held in Gramado, Rio Grande do Sul, Brazil from August 30th to September 3rd, 2010.
For inquiries concerning this call for tutorial proposals, please contact: sibgrapi.2010.tutorial@gmail.com.
Important Dates (Extended)
Submission deadline: May 12, 2010.
Notification of acceptance: June 1, 2010.
Survey paper and handouts due: July 15, 2010.
Types of Tutorials
The tutorials will be divided into regular tutorials, consisting of 3-hour lectures, and hands-on lab tutorials, which may take from 3 hours (half day) to 6 hours (full day) to cover theory and practice in the labs. In both cases, the tutorials may be elementary, intermediate, or advanced, and they may be presented in Portuguese or English, with handouts in either language. We will provide technical support and computers for attendees to use during the hands-on tutorials.
- Elementary Tutorials: typically complement the basic undergraduate curriculum in Computer Science and help attract students to graduate studies in Computer Vision (CV), Image Processing (IP), Computer Graphics (CG), and Pattern Recognition (PR). Instructors should not assume that the audience has basic knowledge in these areas.
- Intermediate Tutorials: are targeted at students, professionals, and researchers in CV, IP, CG, and PR who wish to learn advanced techniques that can be used in their work. Instructors may assume that attendees are familiar with basic notions of mathematics, numerical methods, programming, and CV, IP, CG, and PR.
- Advanced Tutorials: should focus on state-of-the-art research, recent developments, emerging topics, and novel applications in CV, IP, CG, and PR.
Proposal Selection
The tutorials represent an opportunity for participants of SIBGRAPI 2010 to acquire technical knowledge in CV, IP, CG, PR, and their applications. The proposals will be judged on: relevance for the conference; expected size of the audience; potential to attract participants to the conference; originality; and qualification of the instructors in the topic of the tutorial.
How To Submit a Proposal
Tutorial proposals should be submitted by email to sibgrapi.2010.tutorial@gmail.com, with the subject “SIBGRAPI 2010 – Tutorial Submission”. Please send your proposal, of up to 7 pages, in PDF format (preferred) or plain text only. See the LaTeX template on the SIBGRAPI 2010 website. The proposals should contain the following information:
First page:
Title;
Type (hands-on lab, regular or both);
Level (elementary, intermediate or advanced);
Abstract.
Next 6 pages:
Motivation;
Target audience;
Interest for the CV, IP, CG and PR community;
List of the topics to be presented, including: estimated duration, subtopics, relevant literature;
Presentation requirements: equipment and/or software;
Short biography of the instructors.
As attachment:
Curriculum Vitae of the instructors, with full name, address, e-mail, institution, education, publications, and experience in the topic of the tutorial;
Planned material to be distributed to the participants, such as slides, images, animations, books, etc. This material is optional at submission time, but its inclusion is strongly recommended, as it allows a better assessment of the supporting material to be used during the course;
Incomplete or late submissions will not be considered. All submissions will be acknowledged.
It may be useful to look at the tutorials presented in previous years. Links to previous SIBGRAPI editions are available at the CEGRAPI website.
Handouts and other materials
Instructors of the selected tutorials must prepare a survey paper in English for the electronic version of the conference proceedings. We are negotiating with IEEE Computer Society CPS (Conference Publishing Services) to include the survey papers in the IEEE Xplore Digital Library. A survey paper is a paper with an introduction, a structured presentation of the topic, and conclusions indicating trends, applications, and directions for future work. See instructions for authors on how to format survey papers (LaTeX templates) in the SIBGRAPI 2010 website.
Handouts in English or Portuguese are also welcome and will be distributed among the participants of the tutorial. A handout is a copy of the slides used during the presentation of the course at SIBGRAPI 2010. The authors should provide a file with 4 slides per page in landscape orientation. The pages should be numbered and contain the footer “SIBGRAPI 2010 Tutorial”. We recommend using a Times font of 14 pt or larger for better legibility of slides and notes.
Handouts should be sent in PDF format by e-mail to sibgrapi.2010.tutorial@gmail.com. The handouts will be available in the electronic version of the conference proceedings (CD-ROM).
Chairs
Luciano Silva (UFPR)
Manuel Oliveira (UFRGS)
Program Committee
David Menotti (UFOP)
Esteban Clua (UFF)
Marcelo Walter (UFRGS)
Nina Hirata (USP)
Ricardo Farias (UFRJ)
Siome Goldenstein (UNICAMP)
Tutorial 1
Graph-Based Image Segmentation
(Tutorial Type: Regular, Level: Advanced)
Alexandre Falcão, Thiago Spina, Paulo Miranda and Fábio Cappabianco*
Institute of Computing – UNICAMP, *Foundation Hermínio Ometto – Uniararas
Abstract: The analysis of digital scenes often requires the segmentation of connected components, named objects, in images and videos. The problem consists of defining the whereabouts of a desired object (recognition) and its spatial extent in the image (delineation). Humans can outperform computers in recognition, but the opposite holds for delineation. Interpreting an image as a graph, whose nodes are the image pixels and whose arcs are defined by some adjacency relation, provides different topologies for exploiting optimum connectivity between pixels for effective delineation, while also taking advantage of a solid mathematical background from graph theory. Recognition can be performed by a human operator or by an object model. We will first present a graph-based methodology, called the Image Foresting Transform (IFT), which has been used to develop fast and accurate delineation approaches. Interactive IFT-based methods exploit a synergism between the computer, for delineation, and the user, for recognition by simple marker selection. Some of these methods will be explained and illustrated for image editing of natural scenes using a common framework for arc-weight estimation. Next, we will show the links between image segmentation based on the IFT algorithm and the max-flow/min-cut algorithm. IFT-based segmentation methods can also use object models or other prior knowledge about the problem to eliminate user intervention. We will then present a recent object model, called the cloud system, which combines shape information and IFT-based delineation in a synergistic way. The cloud system model will be demonstrated for 3D automatic segmentation of brain structures. Finally, we will conclude the tutorial by presenting an automatic brain tissue classification method, which combines prior knowledge about the problem with IFT-based clustering.
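At its core, the IFT propagates optimum path costs from seed pixels over the image graph in a Dijkstra-like fashion. The toy sketch below is our own illustration of that idea (not the instructors' code; the function and variable names are ours), using the max-arc-weight path cost commonly associated with IFT watershed segmentation on a 4-neighbor pixel graph:

```python
import heapq

def ift_segment(image, seeds):
    """Toy Image Foresting Transform (IFT) on a 2D grid.

    image: list of lists of intensities; seeds: {(row, col): label}.
    Each pixel is a graph node; arcs connect 4-neighbors with weight
    |I(p) - I(q)|. The path cost is the maximum arc weight along the
    path, and each pixel receives the label of the seed that reaches
    it with minimum cost.
    """
    rows, cols = len(image), len(image[0])
    INF = float("inf")
    cost = [[INF] * cols for _ in range(rows)]
    label = [[None] * cols for _ in range(rows)]
    heap = []
    for (r, c), lab in seeds.items():
        cost[r][c] = 0
        label[r][c] = lab
        heapq.heappush(heap, (0, r, c))
    while heap:
        cst, r, c = heapq.heappop(heap)
        if cst > cost[r][c]:
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                w = max(cst, abs(image[r][c] - image[nr][nc]))
                if w < cost[nr][nc]:
                    cost[nr][nc] = w
                    label[nr][nc] = label[r][c]
                    heapq.heappush(heap, (w, nr, nc))
    return label

# Two flat regions separated by a sharp edge; one seed marker in each.
img = [[10, 10, 10, 90, 90],
       [10, 10, 10, 90, 90],
       [10, 10, 10, 90, 90]]
labels = ift_segment(img, {(0, 0): 1, (0, 4): 2})
```

Here the two seeds play the role of the user's markers: each competes to conquer pixels through cheapest paths, and the strong intensity edge keeps each label on its own side.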
Short biography of the instructors:
Dr. Alexandre X. Falcão is Associate Professor at the Institute of Computing, University of Campinas (UNICAMP), SP, Brazil. He received a B.Sc. in Electrical Engineering (1988) from the Federal University of Pernambuco (UFPE), PE, Brazil. He has worked in image processing and analysis since 1991. In 1993, he received an M.Sc. in Electrical Engineering from UNICAMP. During 1994-1996, he worked at the University of Pennsylvania, PA, USA, on interactive image segmentation for his doctorate. He received his doctorate in Electrical Engineering from UNICAMP in 1996. In 1997, he developed video quality assessment methods for TV Globo, RJ, Brazil. He has been Professor at the Institute of Computing, UNICAMP, since 1998 and has published over 100 works on topics involving image processing and analysis, volume visualization, content-based image retrieval, mathematical morphology, pattern recognition, and medical imaging applications.
Thiago V. Spina received a B.Sc. in Computer Science (2009) from the University of Campinas (UNICAMP), SP, Brazil. Since 2010 he has been working on natural image editing for his Master’s degree in Computer Science at the University of Campinas. His research primarily involves segmentation and matting of images and videos using graph-based tools. His areas of interest are image processing and analysis, computer vision, 3D reconstruction from videos, and content-based image/video retrieval.
Dr. Paulo A. V. Miranda received a B.Sc. in Computer Engineering (2003) and an M.Sc. in Computer Science (2006) from the University of Campinas (UNICAMP), SP, Brazil. During 2008-2009, he worked at the University of Pennsylvania, PA, USA, on image segmentation for his doctorate. He received his doctorate in Computer Science from the University of Campinas (UNICAMP) in 2009. He is currently a post-doctoral researcher on the BIA (Brain Image Analyzer) project, conducted in collaboration with professors of the Department of Neurology, UNICAMP, under the supervision of Prof. Falcão. His research involves image segmentation and analysis, medical imaging applications, pattern recognition, and content-based image retrieval.
Dr. Fabio Cappabianco received a B.Sc. in Computer Engineering (2003) and an M.Sc. in Computer Science (2006) from the University of Campinas (UNICAMP), SP, Brazil. During 2008-2009, he worked at the University of Pennsylvania, PA, USA, on image segmentation for his doctorate. He received his doctorate in Computer Science from the University of Campinas (UNICAMP) in 2010. He is currently lecturing at Foundation Hermínio Ometto. He is also a collaborator on the BIA (Brain Image Analyzer) project, conducted in collaboration with professors of the Department of Neurology, UNICAMP, under the supervision of Prof. Falcão. His research involves medical image segmentation and analysis, pattern recognition, and computer architectures for image processing.
Tutorial 2
Provenance-Enabled Data Exploration and Visualization with VisTrails
(Tutorial Type: Regular and hands-on lab, Level: Elementary to advanced)
Cláudio Silva, Juliana Freire, Emanuele Santos and Erik Anderson
Scientific Computing and Imaging Institute & School of Computing, University of Utah Salt Lake City, USA
Abstract: Scientists are now faced with an incredible volume of data to analyze. To explore and understand the data, they need to assemble complex workflows (pipelines) to manipulate the data and create insightful visual representations. Provenance is essential in this process. The provenance of a digital artifact contains information about the process and data used to derive the artifact. This information is essential for preserving the data, for determining the data’s quality and authorship, and for both reproducing and validating results, all important elements of the scientific process. Provenance has been shown to be particularly useful for enabling comparative visualization and data analysis. This tutorial will inform computational and visualization scientists, users, and developers about different approaches to provenance and the trade-offs among them. Using the VisTrails project as a basis, we will cover different approaches to acquiring and reusing provenance, including techniques that attendees can use for provenance-enabling their own tools. The tutorial will also discuss uses of provenance that go beyond the ability to reproduce and share results.
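As a rough illustration of the concept, the sketch below (a toy model of our own devising, not the VisTrails data model or API) runs a small pipeline while recording, for each step, the operation name, its parameters, and a hash of its output, which is the minimal information one would need to audit or replay how the final artifact was produced:

```python
import hashlib
import json

def run_pipeline(data, steps):
    """Run a chain of named transformations over `data`, recording a
    provenance entry per step: the operation name, its parameters,
    and a SHA-256 digest of the step's output."""
    provenance = []
    for name, func, params in steps:
        data = func(data, **params)
        digest = hashlib.sha256(
            json.dumps(data, sort_keys=True).encode()
        ).hexdigest()
        provenance.append({"op": name, "params": params,
                           "output_sha256": digest})
    return data, provenance

# A two-step pipeline over a small list of numbers.
steps = [
    ("scale",  lambda xs, factor: [x * factor for x in xs], {"factor": 2}),
    ("offset", lambda xs, amount: [x + amount for x in xs], {"amount": 1}),
]
result, prov = run_pipeline([1, 2, 3], steps)
# result == [3, 5, 7]; prov records both steps with output hashes
```

Given such a record, comparing two runs reduces to comparing their provenance entries, which hints at why provenance enables the comparative visualization and analysis mentioned above.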
Short biography of the instructors:
Cláudio Silva received the BS degree in mathematics from the Federal University of Ceará, Brazil, in 1990, and the PhD degree in computer science from the State University of New York at Stony Brook in 1996. He is a Professor of computer science and a faculty member of the Scientific Computing and Imaging (SCI) Institute at the University of Utah. Before joining Utah in 2003, he worked in industry (IBM and AT&T), government (Sandia and LLNL), and academia (Stony Brook and OGI). He has co-authored over 140 technical papers and 7 patents, primarily in visualization, geometric computing, and related areas. He served as papers co-chair for the IEEE Visualization conference in 2005 and 2006. He received IBM Faculty Awards in 2005, 2006, and 2007. Dr. Silva is a co-creator of VisTrails (www.vistrails.org), an open-source scientific workflow and provenance management system.
Juliana Freire is an Associate Professor at the School of Computing, University of Utah. Before joining the University of Utah, she was a member of technical staff in the Database Systems Research Department at Bell Laboratories (Lucent Technologies) and an Assistant Professor at OGI/OHSU. She has co-authored over 100 technical publications and holds 4 U.S. patents. She has chaired or co-chaired several workshops and conferences, and she has participated as a program committee member in over 60 events. Dr. Freire’s research has focused on extending traditional database technology and developing techniques to address new data management problems introduced by the Web and scientific applications. Within scientific data management, she is best known for her work on provenance management and for being a co-creator of the open-source VisTrails system. In 2008, Dr. Freire received an NSF CAREER Award and an IBM Faculty Award. Her research has been funded by grants from the National Science Foundation, the Department of Energy, and the University of Utah.
Emanuele Santos is a research assistant and graduate student at the University of Utah. She received M.S. and B.S. degrees in computer science from the Federal University of Ceara in Brazil. Between 2002 and 2005, she lectured college-level computer science courses in Brazil. Her research interests include scientific data management, comparative visualization and provenance-rich publications. She is a Fulbright scholar and one of the main developers of VisTrails (www.vistrails.org), an open-source scientific workflow and provenance management system.
Erik W. Anderson is a research assistant and graduate student at the University of Utah. He received a B.S. degree in computer science and a B.S. degree in electrical and computer engineering from Northeastern University. His research interests include scientific visualization, signal processing, computer graphics, and multimodal visualization. He regularly holds seminars on neuroscience analysis and visualization at the SCI Institute. He is also one of the main developers of VisTrails (www.vistrails.org), an open-source scientific workflow and provenance management system.
Tutorial 3
Designing multi-projector VR systems: from bits to bolts
(Tutorial Type: Regular, Level: Elementary)
Alberto Raposo, Felipe Carvalho, Luciano Pereira Soares, Joaquim A. Jorge*, Miguel Sales Dias**, Bruno R. de Araújo*
Tecgraf – Computer Graphics Technology Group, Pontifical Catholic University of Rio de Janeiro, *Instituto Superior Técnico, **MLDC – Microsoft Language Development Center
Abstract: This tutorial will present how to design, construct, and manage immersive multi-projection environments, covering everything from projection technologies to computer hardware and software integration. Topics such as tracking, multimodal interaction, and audio will be explored. Finally, we will present important design decisions from real cases.
Short biography of the instructors:
Alberto Raposo holds PhD and MSc degrees in Electrical Engineering from University of Campinas, Brazil. He is currently a professor at the Computer Science Department at the Pontifical Catholic University of Rio de Janeiro and coordinates the Virtual Reality group at the Computer Graphics Technology Group (Tecgraf) in the same university. His research interests include 3D interaction techniques, real-time visualization of massive models, augmented reality, and collaborative environments. He has co-authored more than 80 refereed publications.
Felipe Carvalho holds PhD and MSc degrees in Computer Science from Pontifical Catholic University of Rio de Janeiro, Brazil. He is currently a researcher at the Computer Graphics Technology Group (Tecgraf) working in several projects at Petrobras. His research interests include 3D interaction techniques, virtual and augmented reality, and development of nonconventional devices.
Tutorial 4
A Sketch of Sketch-Based Modeling and Interfaces
(Tutorial Type: Regular, Level: Intermediate)
Leandro Moraes Valle Cruz, Luiz Velho
IMPA – Instituto de Matemática Pura e Aplicada, VISGRAF – Laboratório de Computação Gráfica e Visão Computacional
Abstract: Sketch-Based Interfaces and Modeling (SBIM) has been a research line in geometric modeling aimed at creating applications that are more accessible and natural to users. We intend to discuss this theme by presenting techniques and results from some of the main works in the area. We will give a brief theoretical overview of the field and address the main design decisions of an SBIM application project. We intend to discuss modeling aspects such as object representation and techniques for creation and editing using sketches. We will discuss modeling possibilities that arise from using different data-acquisition devices (such as tablets and multi-touch interfaces). We will present some aspects related to SBIM applications, such as goals for building a good modeling interface, possible interface objects, and application control using gestures. By the end of this tutorial, we intend for participants to have an overview of the area, knowing the main works as well as its possibilities, challenges, and trends.
Short biography of the instructors:
Leandro Moraes Valle Cruz has been a Master's student at IMPA since March 2009. He received a Licentiate degree in Mathematics from the Universidade Estadual do Norte Fluminense in 2006 and a Bachelor's degree in Computer Science from the Universidade Cândido Mendes in 2009. For four years he taught mathematics at the Centro de Ensino a Distância do Estado do Rio de Janeiro, and for one year he worked at the Núcleo de Sistemas de Informação of the Instituto Federal Fluminense on multimedia for web applications. During his mathematics degree, over two years, he took part in an undergraduate research project in geometric modeling, on n-dimensional graphical objects, at the Laboratório de Ciências Matemáticas. In that period he published work on geometric modeling and NURBS at WUW-SIBGRAPI 2005, a poster at the 25th Colóquio de Matemática in 2005, and a monograph at the III Jornadas de Iniciação Científica at IMPA (2006). In early 2009 he resumed his research in modeling, resulting in a paper on sketch-based modeling in natural interfaces at WUW-SIBGRAPI 2009. He currently devotes his research to sketch-based modeling and interfaces, the topic of his Master's dissertation. As a result of this work, he published a technical report in March 2010 with an analysis of the main themes of the area (the topics covered in this tutorial).
Luiz Velho is a Full Researcher at IMPA. He received a Bachelor's degree in Industrial Design from ESDI – Universidade do Rio de Janeiro in 1979, a Master's degree in Computer Graphics from MIT (Massachusetts Institute of Technology) in 1985, and a PhD in Computer Science from the University of Toronto in 1994. He has experience in Computer Graphics in areas such as modeling, rendering, image processing, and animation. During 1982 he was a visiting researcher at the National Film Board of Canada. From 1985 to 1987 he was a systems engineer at the Fantastic Animation Machine in New York, where he developed a 3D visualization system. From 1987 to 1991 he was the principal engineer at Rede Globo de Televisão in Brazil, where he created special-effects and visual-simulation systems. In 1994 he was a visiting professor at the Courant Institute of Mathematical Sciences, New York University. He is the author of several books on computer graphics and has published many papers in the area. He presented courses at SIGGRAPH: Modeling in Graphics in 1993, and Warping and Morphing of Graphical Objects in 1994 and 1997. He co-authored the Fourier to Wavelets courses at SIGGRAPH 1998 and 1999. His current research interests include theoretical foundations of computer graphics, physically based methods, wavelets, modeling with implicit objects, and image-based methods.
Tutorial 5
Visual Multidimensional Geometry and its Applications
(Tutorial Type: Regular, Level: Elementary)
Alfred Inselberg
School of Mathematical Sciences, Tel Aviv University Tel Aviv, Israel
Abstract: With parallel coordinates (abbr. k-coords) the perceptual barrier imposed by our 3-dimensional habitation is breached, enabling the visualization of multidimensional problems. In this tutorial a panorama of highlights from the foundations to the most recent results, interlaced with applications, is intuitively developed. By learning to untangle patterns from k-coords displays (Figs. 1, 2), a powerful knowledge discovery process has evolved. It is illustrated on real datasets together with guidelines for exploration and good query design. Realizing that this approach is intrinsically limited (see Fig. 3 – left) leads to a deeper geometrical insight: the recognition of M-dimensional objects recursively from their (M−1)-dimensional subsets (Fig. 3 – right). It emerges that any linear N-dimensional relation is represented by (N−1) indexed points. For example, in 3-D, two points with two indices represent lines and two points with three indices represent planes. In turn, powerful geometrical algorithms (intersections, containment, proximities) and applications including classification (Fig. 4) emerge. A smooth surface in 3-D is the envelope of its tangent planes, each of which is represented by 2 points with 3 indices (Fig. 6). As a result, a surface in 3-D is represented by two planar regions, and in N dimensions by (N−1) planar regions. This is equivalent to representing a surface by its normal vectors. Developable surfaces are represented by curves (Fig. 7) revealing the surfaces’ characteristics. Convex surfaces in any dimension are recognized by the hyperbola-like (i.e. having two asymptotes) regions from just one orientation (Fig. 5 – right, Fig. 8, Fig. 10 – right). Non-orientable surfaces (like the Möbius strip) yield stunning patterns (Fig. 9), unlocking new geometrical insights. Non-convexities such as folds, bumps, coiling, and dimples are no longer hidden (Fig. 10 – left) and are detected from just one orientation.
Evidently this representation is preferable for some applications even in 3-D. Many of these results were first discovered visually and then proved mathematically, in the true spirit of geometry. The patterns generalize to N dimensions and persist in the presence of errors, which is good news for the applications. The parallel coordinates methodology is used in collision avoidance and conflict resolution algorithms for air traffic control (3 USA patents), computer vision (USA patent), and data mining (USA patent) for data exploration and automatic classification, optimization, process control, and elsewhere. Parallel coordinates are included in many commercial and free software packages and are taught in numerous courses worldwide.
Short biography of the instructor:
Alfred Inselberg (AI) received a Ph.D. in Mathematics and Physics from the University of Illinois (Champaign-Urbana) in 1965 and was Research Professor there until 1966. He held research positions at IBM, where he developed a Mathematical Model of the Ear (TIME Nov. 74), concurrently holding joint appointments at UCLA, USC, Technion, and Ben Gurion University. Since 1995 he has been at the School of Mathematical Sciences at Tel Aviv University. AI was elected Senior Fellow at the San Diego Supercomputing Center in 1996. He invented and developed the multidimensional system of Parallel Coordinates, for which he received numerous awards and patents (on Air Traffic Control, Collision Avoidance, Computer Vision, and Data Mining). His textbook, "Parallel Coordinates: Visual Multidimensional Geometry and its Applications", was published by Springer in October 2009 and contains a chapter on Data Mining. Among others, this book won praise from Stephen Hawking.
Tutorial 6
Fundamentals of visual data mining, information retrieval, extraction, and analysis
(Tutorial Type: Regular, Level: Elementary)
Haim Levkowitz
Institute for Visualization and Perception Research And Graphics Research Laboratory, Department of Computer Science University of Massachusetts Lowell, USA
Abstract: Everyone knows how to “Google”; some people even know that Google is a “search engine”; but very few know that “search engines” are “information retrieval” engines. As the amount of information grows so rapidly, finding the right information and analyzing it have become more and more challenging. Search technology has probably been the fastest- and steepest-growing segment ever. And when you find information, that is just about the beginning of the next challenge: extracting meaning and knowledge out of it.
Today, most search queries are formulated with (usually very few) keywords, a very difficult way to express the semantics of your search needs. And the results appear as (very long) lists of text. To find what you have been looking for, or to find out that it is not there, you need to scan through page after page after page of results, which is not a very efficient or effective process. Further, if you are trying to find non-textual information (images, sounds), you have very limited resources. Can we do better than that? Yes. How? By replacing the sequential search through results’ text with perceptually stronger visual mechanisms, often referred to as visual text (or data) mining, and by focusing on better interaction.
Short biography of the instructor:
Haim Levkowitz is an associate professor of computer science and co-director of the Institute for Visualization and Perception Research at the University of Massachusetts Lowell, in Lowell, MA, USA. He is a world-renowned authority on visualization, perception, color, and their application in data mining and information retrieval. He is the author of “Color Theory and Modeling for Computer Graphics, Visualization, and Multimedia Applications” (Springer 1997) and co-editor of “Perceptual Issues in Visualization” (Springer 1995), as well as many papers in these subjects. He has more than 35 years experience in teaching and lecturing, and has taught many tutorials and short courses, in addition to regular academic courses.
Tutorial 7
Development of Computer Graphics and Digital Image Processing Applications on the iPhone
(Tutorial Type: Regular, Level: Elementary)
Luciano Godoy Fagundes and Rafael Santos (INPE)
Abstract: The iPhone is one of the most powerful, complete, and versatile portable phones on the market. There are presently more than 150,000 applications available for the iPhone, and its users have downloaded more than three billion applications so far. The iPhone has several capabilities that make it an interesting platform for the development of applications that use image processing, computer graphics, and/or pattern recognition algorithms: it is stable, popular, powerful, flexible, and of course portable. What can a developer expect from the platform? What, in practical terms, can be done to implement those types of algorithms, and at what price? This short course presents concepts and practical issues in the development of image processing, computer graphics, and pattern recognition applications on the iPhone. Code snippets will be provided, and issues such as memory management, capabilities, and limitations will be discussed.
Short biography of the instructors:
Luciano Godoy Fagundes is currently working on his PhD degree at INPE (Instituto Nacional de Pesquisas Espaciais) in Brazil. He obtained his Master's degree at ITA (Instituto Tecnológico de Aeronáutica) and his Master in Business Administration specialization at FGV (Fundação Getúlio Vargas). He has been part of Avaya Inc./Lucent Technologies for the last 12 years, where he worked as a Professional Services Consultant, became a Systems Engineer at Bell Labs, and has spent the last 5 years as a Solutions Engineer bringing new technologies to the Caribbean and Latin America. In 2009 he founded a start-up company, Babs2Go, to distribute iPhone apps on the App Store. His Lattes CV can be accessed here.
Rafael Santos obtained his Master's and PhD degrees in applied artificial intelligence at the Kyushu Institute of Technology in Fukuoka, Japan, in 1995 and 1998, respectively, with a grant from the Japanese Ministry of Education. Presently he is a technologist at the National Institute for Space Research (INPE), working on research and development in image processing, data mining, and intelligent web systems with applications in remote sensing, sensor networks, and education/outreach. He is the author of the book (in Portuguese) “Introdução à Programação Orientada a Objetos usando Java”, part of the Campus/SBC textbook collection, and author of several tutorials, short courses, and talks on data mining, visualization, image processing, and programming, including the on-line book Java Image Processing Cookbook. His Lattes CV can be accessed here.