Fall 2011

Student Seminar

September 6, 2011

In the traditional “twenty questions” game, the task at hand is to determine a fact or target location by sequentially asking a knowledgeable oracle questions. This problem has been extensively studied in the past, and results on optimal questioning strategies are well understood. In this thesis, however, we consider the case where the answers from the oracle are corrupted with noise from a known model. With this problem occurring both in nature and in a number of computer vision applications (e.g., object detection and localization, tracking, image registration), the goal is to determine some policy, or sequence of questions, that reduces the uncertainty on the target location as much as possible.

We begin by presenting a Bayesian formulation of a simple and idealized parameter estimation problem. Starting with a prior distribution on the parameter, principles from dynamic programming and information theory can be used to characterize an optimal policy that minimizes the expected entropy of the parameter's distribution. We then show the existence of a simple greedy policy that is globally optimal. Given these results, we describe a series of stochastic optimization algorithms, which we call Active Testing, that embody the noisy twenty questions paradigm in the context of computer vision. We describe the benefit of this technique in two real-world applications: (i) face detection and localization, and (ii) tool tracking during retinal microsurgery. In the first application, we show that substantial computational gains over existing approaches are achieved when localizing faces in images. In the second, we tackle a much more challenging real-world problem: finding the position and orientation of a surgical tool during surgery. Our approach provides a new way to perform fast and reliable tool tracking even when the tool frequently moves in and out of the field of view.
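
To make the greedy policy concrete, the following toy sketch (not code from the thesis; the noise rate, candidate set, and question pool are all assumed for illustration) maintains a posterior over candidate target locations and, at each step, picks the binary question that minimizes the expected posterior entropy under a symmetric noise model.

    import math

    # Toy greedy question selection under a symmetric noise model.
    NOISE = 0.1  # probability that the oracle's yes/no answer is flipped (assumption)

    def entropy(p):
        return -sum(pi * math.log2(pi) for pi in p if pi > 0)

    def update(prior, question, answer):
        # Bayes update: `question` is the set of candidate indices it asks about.
        post = [pi * ((1 - NOISE) if (i in question) == answer else NOISE)
                for i, pi in enumerate(prior)]
        z = sum(post)
        return [p / z for p in post]

    def expected_entropy(prior, question):
        p_yes = sum(pi * ((1 - NOISE) if i in question else NOISE)
                    for i, pi in enumerate(prior))
        return (p_yes * entropy(update(prior, question, True)) +
                (1 - p_yes) * entropy(update(prior, question, False)))

    def best_question(prior, questions):
        # Greedy step: ask the question minimizing the expected posterior entropy.
        return min(questions, key=lambda q: expected_entropy(prior, q))

    prior = [0.25, 0.25, 0.25, 0.25]                    # four candidate locations
    print(best_question(prior, [{0, 1}, {0, 1, 2}]))    # -> {0, 1}, the even split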

September 15, 2011

A new research paradigm in healthcare applications investigates how to improve a patient’s quality of care with wearable embedded systems that continuously monitor the patient’s vital signs as he or she moves about the environment. While previous medical examinations could extract only localized symptoms through snapshots, continuous monitoring can now discreetly analyze how a patient’s lifestyle may affect his or her physiological condition and whether additional symptoms occur under various stimuli.

My research used participatory design methods to develop an electronic triage system that replaced the paper triage system and changed how emergency personnel interact with, collect, and process data at mass casualty incidents. My research investigated the design of an infrastructure that provided efficient resource allocation by continuously monitoring the vital signs and locations of patients. This real-world deployment uncovered numerous research challenges that arose from the complex interactions of the embedded systems with the dynamic environment in which they were deployed. I address the challenge of body attenuation by constructing a model of attenuation in body sensor networks from experimental data. I also use data-driven methods to address the challenge of limited storage capacity in mobile embedded systems during network partitions. An optimization algorithm models inter-arrival time, intra-arrival time, and body attenuation to use storage capacity efficiently. My approach mitigates data loss and provides continuous data collection through a combination of continuous optimization, statistical variance, and data-driven modeling techniques.

A data-driven approach that uses quantitative information from experimental deployments is necessary when building realistic systems for medical applications, where failure can result in the loss of a life. My research leverages mobile health systems to improve health outcomes by defining risk factors for diseases within communities, improving the ability to track and diagnose diseases, and identifying patterns for behavior analysis and modification. My research contributes to the foundation of computer-integrated medicine research by creating a class of systems and a collection of techniques for informatics-based preventive interventions.

Speaker Biography: Dr. Tammara Massey holds a joint appointment as an Assistant Research Professor in the Computer Science Department at Johns Hopkins University and a Systems Engineer at the Johns Hopkins University Applied Physics Laboratory. She is also a member of the Johns Hopkins Systems Institute. Dr. Massey earned her Master’s in Computer Science from the Georgia Institute of Technology and her PhD from the University of California, Los Angeles. She is a subject matter expert in computer-integrated medicine, health informatics, preventive interventions, and sensor-enabled embedded systems. Her research explores a data-driven approach to developing reconfiguration techniques in embedded systems for medical applications, explores modeling of attenuation in body sensor networks, and leverages statistical power optimization techniques to detect the physical tampering of portable devices. Tammara has published over 20 journal and conference papers, co-authored 2 book chapters, and is a named inventor on a provisional patent.

September 22, 2011

Recently, robots have gained capabilities in both sensing and actuation that enable operation in close proximity to humans. Even direct physical interaction has become possible without sacrificing speed or payload. However, it is clear that these human-friendly robots will look very different from today’s industrial ones. Rich sensory information, lightweight design, and soft-robotics features are required to reach the expected performance and safety during interaction with humans or in unknown environments. In this talk I will give an overview of my research topics at DLR that aim at solving these long-term challenges. The first part of my talk deals with the realization of sensor-based co-workers/servants that bring robots closer to humans and enable close cooperation with them. I will describe our design methodologies, biomechanical safety analysis, exteroceptive sensing methods, control and motion algorithms, the developed HRI schemes, and several applications that benefit from the achieved advances. The second part of the talk covers variable impedance actuation, which implements soft-robotics features mainly in hardware.

Based on the design and control ideas of actively controlled compliant systems, we intend to outperform this mature technology with new variable-stiffness systems. I will present the overall design ideas, the recently built hand-arm system, and novel control concepts that aim at exploiting the natural dynamics of these systems.

Speaker Biography: Sami Haddadin received his Dipl.-Ing. (German equivalent to M.Sc.) degree in Electrical Engineering in 2005 and his M.Sc. in Computer Science in 2009 from the Technical University of Munich (TUM). He holds an Honours degree in Technology Management from the Technical University of Munich and the Ludwig Maximilian University Munich (LMU). He obtained his PhD from RWTH Aachen. Sami Haddadin is with the Robotics and Mechatronics Center of the German Aerospace Center (DLR), where he heads the “Human-Robot Interaction” group. Since 2009 he has also lectured on advanced robotics at TUM. His main research topics are physical Human-Robot Interaction, nonlinear robot control, safety and dependability in robotics, optimal control and learning, real-time motion planning, and reactive planning. Among other things, he received the Best Application Paper Award from IROS 2008, the Best Service Robotics Paper Award from ICRA 2009, the euRobotics Technology Transfer Award 2011, and was a finalist for the Robotdalen Science Award 2009.

Student Seminar

October 4, 2011

Continued improvements in physical instruments and data pipelines have led to exponential growth in data size in the sciences. In astronomy, for example, the Panoramic Survey Telescope and Rapid Response System (Pan-STARRS) produces tens of terabytes daily. In turn, the scientific community distributes data geographically at a global scale to facilitate the accumulation of data at multiple, autonomous sources and relies on application-driven optimizations to manage and share repositories at a petabyte scale. More importantly, workloads that comb through vast amounts of data are gaining importance in the sciences. These workloads consist of “needle in a haystack” queries that are long running and data intensive, requiring non-indexed scans of multi-terabyte tables, so that query throughput limits performance. Queries also join data from geographically distributed sources, so that transmitting data puts tremendous strain on the network. Thus, query scheduling needs to be re-examined to overcome scalability barriers and enable a large community of users to simultaneously explore the resulting massive amounts of scientific data. Toward this goal, we study algorithms that incorporate network structure into scheduling for distributed join queries. The resulting schedules exploit excess network capacity and minimize the utilization of network resources over multiple queries. We also put forth a data-driven, batch processing paradigm that improves throughput in highly contended databases by identifying partial overlap in the data accessed by incoming queries. Instrumenting our algorithms in astronomy and turbulence databases provides significant reductions in both network and I/O costs. Moreover, similar resource scheduling problems exist in cloud computing, and we extend our algorithms to large-scale data processing systems such as Hadoop, Cassandra, and BigTable.
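
As a purely illustrative example of the batching idea (the names, data structures, and greedy policy below are hypothetical, not the scheduling algorithms from the talk), scan-heavy queries whose accessed partitions overlap can be grouped so that one sequential scan serves several queries at once:

    # Hypothetical greedy batching of scan-heavy queries: queries whose
    # accessed partitions overlap are grouped to share a sequential scan.
    def batch_by_overlap(queries):
        """queries: list of (query_id, set_of_partitions_accessed)."""
        batches = []                            # each batch: [partition_set, query_ids]
        for qid, parts in queries:
            for batch in batches:
                if batch[0] & parts:            # any shared partition -> share the scan
                    batch[0] |= parts
                    batch[1].append(qid)
                    break
            else:
                batches.append([set(parts), [qid]])
        return batches

    workload = [("q1", {"p1", "p2"}), ("q2", {"p2", "p3"}), ("q3", {"p7"})]
    for parts, qids in batch_by_overlap(workload):
        print(sorted(parts), qids)              # q1 and q2 share one scan; q3 stands alone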

October 13, 2011

Geo-replicated, distributed data stores that support complex online applications, such as social networks, must provide an always-on experience where operations always complete with low latency. Today’s systems often sacrifice strong consistency to achieve these goals, exposing inconsistencies to their clients and necessitating complex application logic. This talk will present the design and implementation of COPS, a key-value store that delivers causal consistency across the wide area. A key contribution of COPS is its scalability: it can enforce causal dependencies between keys stored across an entire cluster, rather than on a single server as in previous systems. The talk will also present COPS-GT, which adds get transactions that enable a client to obtain a consistent view of multiple keys without locking or blocking.
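
A minimal sketch of the dependency-tracking idea, under a deliberately simplified data model (the class names, version scheme, and replication interface below are illustrative, not the actual COPS API): each write carries the versions it causally depends on, and a remote replica buffers the write until those dependencies are visible locally.

    class Replica:
        def __init__(self):
            self.store = {}        # key -> (version, value)
            self.pending = []      # replicated writes waiting on dependencies

        def put_local(self, key, value, version, deps):
            self.store[key] = (version, value)
            return {"key": key, "value": value, "version": version, "deps": deps}

        def apply_replicated(self, write):
            self.pending.append(write)
            self._drain()

        def _deps_satisfied(self, write):
            return all(self.store.get(k, (0, None))[0] >= v for k, v in write["deps"])

        def _drain(self):
            progress = True
            while progress:
                progress = False
                for w in list(self.pending):
                    if self._deps_satisfied(w):          # causal deps visible -> expose write
                        self.store[w["key"]] = (w["version"], w["value"])
                        self.pending.remove(w)
                        progress = True

    # A comment that causally depends on version 1 of "photo" stays buffered
    # at the remote replica until that version of "photo" has arrived.
    local, remote = Replica(), Replica()
    w1 = local.put_local("photo", "beach.jpg", 1, deps=[])
    w2 = local.put_local("comment", "nice!", 1, deps=[("photo", 1)])
    remote.apply_replicated(w2)          # buffered: dependency missing
    remote.apply_replicated(w1)          # photo arrives, comment becomes visible
    print(remote.store)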

Speaker Biography: Wyatt Lloyd is a fifth-year Ph.D. student advised by Michael J. Freedman in the Department of Computer Science at Princeton University. His research investigates broad topics in distributed systems such as masking failure and, more recently, enhancing the consistency of scalable wide-area storage. Wyatt received a B.S. in Computer Science from Penn State and an M.A. in Computer Science from Princeton. He has interned with Boeing and Intel Labs.

October 20, 2011

While most CS/engineering students are learning the basic technologies that underlie cloud computing, few understand the fundamental economics. This talk will first focus on seven business models that underlie most of the business and consumer technology industry. These fundamental economic differences and the accompanying technology form the basis of this next generation of cloud computing services. The second part of the talk discusses a five-layer cloud computing stack with many case studies and concludes with some challenges for JHU students.

Speaker Biography: Timothy Chou has been a leader in bringing enterprise applications to the cloud since 1999, when he became the President of Oracle On Demand. Timothy has over twenty years of experience in the technology business. After leaving Oracle in 2005 he returned to Stanford University and started the first class on software as a service and cloud computing. Subsequently, in 2009, he started the first class on cloud computing at Tsinghua University in Beijing, China. For ten years Dr. Chou has been a visible pioneer in evangelizing this major shift in the software business. He has appeared in various publications including Forbes, Business Week, The Economist, and the New York Times, as well as on CNBC and NPR. In the past year he has been in demand as a public speaker on the subject of cloud computing. He has given keynote addresses both to global CxO audiences and to sales organizations of some of the largest technology companies. He recently completed the 2nd edition of the book Cloud: Seven Clear Business Models. The book is also being translated into Chinese and will be available in the Fall of 2011. Not content to merely teach, he has invested time and treasure in several new cloud computing companies. These companies range from a next-generation application cloud service, to an innovative approach to creating a new channel for cloud computing, to an iPad application to power enterprise sales. Timothy holds a B.S. in Electrical Engineering from North Carolina State University and a Master’s and Ph.D. in Electrical Engineering from the University of Illinois. He served as a member of the board of directors of Embarcadero Technologies (NASDAQ: EMBT) from 2000 until the purchase of the company in 2007. In 2007 he joined the board of directors at Blackbaud (NASDAQ: BLKB). He first drove a Mercedes-Benz in 1988 from Munich to Lisbon and several years ago acquired a mint-condition 2002 CLK 320, which he considers one of the best-looking cars Mercedes ever built.

Distinguished Lecturer

October 25, 2011

Did you hear the one about how many batteries it takes to turn on a Turing machine? None! It’s outside the model of computation. Yet it’s extremely difficult to store information or compute without power. Perpetual computing is hard. As embedded systems continue to shrink in size and energy consumption, the battery becomes the greatest bottleneck. I will describe recent research results on batteryless, RFID-scale computers: the UMass Moo platform, stochastic storage on Half-Wits (USENIX FAST), and energy-aware checkpoints with Mementos (ACM ASPLOS). The UMass Moo is an embedded system based on the Intel WISP. The mixed-signal system combines hardware and software to behave like an RFID tag with non-volatile memory, sensing, radio communication, and von Neumann-style computation. This batteryless device operates on RF energy harvesting and uses a small capacitor as a voltage supply. The capacitor stores 100 million times less energy than a typical AA battery. This lack of energy leads to two research challenges: how to reliably store data in non-volatile memory at low cost and low voltage, and how to compute when power losses interrupt programs every few hundred milliseconds. The Half-Wits work analyzes the stochastic behavior of writing to embedded flash memory at voltages lower than recommended by a microcontroller’s specifications to reduce energy consumption. Flash memory integrated within a microcontroller typically requires the entire chip to operate on a common supply voltage almost double what the CPU portion requires. Our approach tolerates a lower supply voltage so that the CPU may operate in a more energy-efficient manner. Our software-only coding algorithms enable reliable storage at low voltages on unmodified hardware by exploiting the electrically cumulative nature of half-written data in write-once bits. Measurements show that our software approach reduces energy consumption by up to 50%. This work is joint with Erik Learned-Miller (UMass Amherst) and Andrew Jiang (Texas A&M). Mementos helps programs run to completion despite interruptions of power. Transiently powered computers risk the frequent, complete loss of volatile memory. Thus, Mementos automatically instruments programs with energy-aware checkpoints to protect RAM and registers. Mementos consists of a suite of compile- and run-time tools that help to transform long-running programs into interruptible computations. The contributions include a study of the run-time environment for programs on RFID-scale devices, an energy-aware state checkpointing system for the MSP430 family of microcontrollers, and a trace-driven simulator of transiently powered RFID-scale devices. This work is joint with Jacob Sorber (Dartmouth College).
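
As a rough illustration of the checkpointing idea (a sketch in Python rather than MSP430 C, with hypothetical function names and thresholds, and a file standing in for on-chip flash), a long-running computation can persist its state whenever the supply voltage nears the brown-out point and resume from that state after the next power loss:

    import json
    import random

    CHECKPOINT_FILE = "state.json"        # stands in for on-chip flash
    VOLTAGE_THRESHOLD = 2.2               # volts; assumed brown-out margin

    def read_supply_voltage():
        # Placeholder: a real device would sample the harvested supply via an ADC.
        return random.uniform(1.8, 3.0)

    def save_checkpoint(state):
        with open(CHECKPOINT_FILE, "w") as f:
            json.dump(state, f)           # on a real device this would be a flash write

    def restore_checkpoint():
        try:
            with open(CHECKPOINT_FILE) as f:
                return json.load(f)       # resume where the last power cycle ended
        except FileNotFoundError:
            return {"i": 0, "acc": 0}     # fresh start

    def long_running_computation(n_steps=1000):
        state = restore_checkpoint()
        while state["i"] < n_steps:
            state["acc"] += state["i"]    # stand-in for real per-step work
            state["i"] += 1
            if read_supply_voltage() < VOLTAGE_THRESHOLD:
                save_checkpoint(state)    # persist before the capacitor drains
        return state["acc"]

    print(long_running_computation())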

Speaker Biography: Kevin Fu is an Associate Professor of Computer Science and Adjunct Associate Professor of Electrical & Computer Engineering at the University of Massachusetts Amherst. Prof. Fu makes embedded computer systems smarter: better security and safety, reduced energy consumption, faster performance. His most cited contributions pertain to computational RFIDs, trustworthy medical devices, secure storage, and web authentication. His research has been featured in the New York Times, Wall Street Journal, NPR, Boston Globe, Washington Post, LA Times, IEEE Spectrum, Consumer Reports, and several others. Prof. Fu was named MIT Technology Review TR35 Innovator of the Year. He received a Sloan Research Fellowship, NSF CAREER award, and best paper awards from USENIX Security, IEEE Symp. of Security and Privacy, and ACM SIGCOMM. Prof. Fu is an incoming member of the NIST Information Security and Privacy Advisory Board and a visiting scientist at the Food and Drug Administration (FDA). Prof. Fu directs the UMass Amherst Security and Privacy Research lab (spqr.cs.umass.edu), the Open Medical Device Research Library (omdrl.org), and the RFID Consortium on Security and Privacy (RFID-CUSP.org). He is co-director of the Medical Device Security Center (secure-medicine.org). Prof. Fu is a frequent visiting faculty member at Microsoft Research, the MIT Computer Science and Artificial Intelligence Lab, and the Beth Israel Deaconess Medical Center of the Harvard Medical School. Prof. Fu received his Ph.D. in Electrical Engineering and Computer Science from MIT.

November 1, 2011

I will describe a notion of Information for the purpose of decision and control tasks, as opposed to data transmission and storage tasks implicit in Communication Theory. It is rooted in ideas of J. J. Gibson, and is specific to classes of tasks and nuisance factors affecting the data formation process. When such nuisances involve scaling and occlusion phenomena, as in most imaging modalities, the “Information Gap” between the maximal invariants and the minimal sufficient statistics can only be closed by exercising control on the sensing process. Thus, sensing, control and information are inextricably tied. This has consequences in the analysis and design of active sensing systems. I will show applications in vision-based control, navigation, 3-D reconstruction and rendering, as well as detection, localization, recognition and categorization of objects and scenes in live video.

Speaker Biography: Stefano Soatto is the founder and director of the UCLA Vision Lab (vision.ucla.edu). He received his Ph.D. in Control and Dynamical Systems from the California Institute of Technology in 1996; he joined UCLA in 2000 after being Assistant and then Associate Professor of Electrical and Biomedical Engineering at Washington University, Research Associate in Applied Sciences at Harvard University, and Assistant Professor in Mathematics and Computer Science at the University of Udine, Italy. He received his D.Ing. degree (highest honors) from the University of Padova, Italy, in 1992. Dr. Soatto is the recipient of the David Marr Prize (with Y. Ma, J. Kosecka and S. Sastry) for work on Euclidean reconstruction and reprojection up to subgroups. He also received the Siemens Prize with the Outstanding Paper Award from the IEEE Computer Society for his work on optimal structure from motion (with R. Brockett). He received the National Science Foundation Career Award and the Okawa Foundation Grant. He is a Member of the Editorial Board of the International Journal of Computer Vision (IJCV), the Journal of Mathematical Imaging and Vision (JMIV), and Foundations and Trends in Computer Graphics and Vision.

November 3, 2011

Legal requirements and an increase in public awareness due to egregious breaches of individual privacy have made data privacy an important field of research. Recent research, culminating in the development of a powerful notion called differential privacy, has transformed this field from a black art into a rigorous mathematical discipline.

In this talk, we critically analyze the trade-off between accuracy and privacy in the context of social advertising – recommending people, products, or services to users based on their social neighborhood. We present a theoretical upper bound on the accuracy of recommendations that are based solely on a user’s social network, for a given level of (differential) privacy of sensitive links in the social graph. We also show, using real networks, that good private social recommendations are feasible only for a small subset of the users in the social network or for a lenient setting of the privacy parameters.
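
For readers unfamiliar with differential privacy, the standard Laplace mechanism below makes the accuracy/privacy trade-off concrete; it is a generic textbook mechanism, not the specific recommendation mechanism analyzed in the talk. Noise scaled to 1/ε is added to a count of sensitivity 1, so stronger privacy (smaller ε) means a noisier, less accurate answer.

    import random

    def laplace_noise(scale):
        # The difference of two independent exponentials is Laplace-distributed.
        return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

    def private_count(true_count, epsilon, sensitivity=1.0):
        # Adding Laplace(sensitivity / epsilon) noise yields epsilon-differential privacy
        # for a counting query.
        return true_count + laplace_noise(sensitivity / epsilon)

    # e.g. a common-friends count used by a social recommender: smaller epsilon,
    # stronger privacy, larger typical error.
    for eps in (0.1, 1.0, 10.0):
        print(eps, round(private_count(42, eps), 2))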

I will conclude the talk with some exciting new research about a no free lunch theorem, which argues that privacy tools (including differential privacy) cannot simultaneously guarantee utility as well as privacy for all types of data.

Speaker Biography: Ashwin Machanavajjhala is a Senior Research Scientist in the Knowledge Management group at Yahoo! Research. His primary research interests lie in data privacy, with a specific focus on formally reasoning about privacy under probabilistic adversary models. He is also interested in creating and curating structured knowledge bases from unstructured, noisy, and time-varying web data using statistical methods. Ashwin graduated with a Ph.D. from the Department of Computer Science, Cornell University. His thesis work on defining and enforcing privacy was awarded the 2008 ACM SIGMOD Jim Gray Dissertation Award Honorable Mention. He has also received an M.S. from Cornell University and a B.Tech in Computer Science and Engineering from the Indian Institute of Technology, Madras.

November 10, 2011

Cardiac surgical interventions often involve reconstructing complex structures on an arrested and flaccid heart under cardiopulmonary bypass. The relatively recent introduction of 4D (3D volumetric + time) ultrasound in pre- and intra-operative settings has opened the way to the development of tools that extract patient-specific information to help cardiac surgeons perform pre-operative planning and predict the outcome of complex surgical interventions. In this talk, I describe techniques developed in a collaborative project between APL, the JHU SOM, and the JHU BME department, aimed at combining machine vision and modeling/simulation to help surgeons tailor mitral valve surgical interventions (valvuloplasty) to specific patient conditions. At the end of the presentation, I will also review other recent collaborative projects in medical image analysis between the JHU APL and the JHU SOM.

Speaker Biography: Philippe Burlina is with the Johns Hopkins University Applied Physics Laboratory and the Department of Computer Science. He holds an M.S. and a Ph.D. in Electrical Engineering from the University of Maryland at College Park and a Diplome d’Ingenieur in Computer Science from the Universite de Technologie de Compiegne. His research interests span several areas of machine vision, hyperspectral imaging, medical image analysis, and Bayesian filtering.

November 17, 2011

Next-generation sequencing technology allows us to peer inside the cell in exquisite detail, revealing new insights into biology, evolution, and disease that would have been impossibly expensive to find just a few years ago. The shorter read lengths and enormous volumes of data produced by NGS experiments present many computational challenges that my group is working to address. This talk will discuss three problems: (1) mapping next-gen sequences onto the human genome and other large genomes at very high speed; (2) spliced alignment of RNA transcripts to the genome, including fusion transcripts; and (3) transcript assembly and quantitation from RNA-Seq experiments, including the discovery of alternative splice variants. We are developing new computational algorithms to solve each of these problems. For alignment of short reads to a reference genome, our Bowtie program, using the Burrows-Wheeler transform, aligns short reads many times faster than competing systems, with very modest memory requirements [1]. To align RNA-Seq reads (transcripts) to a genome, we have developed a suite of tools including TopHat and Cufflinks [2,3], which can align across splice junctions and reconstruct full-length transcripts from short reads.

This talk will describe joint work with current and former members of my group including Ben Langmead, Cole Trapnell, Mike Schatz, Daehwan Kim, Geo Pertea, Daniela Puiu, and Ela Pertea; and with collaborators including Mihai Pop and Lior Pachter.

  1. B. Langmead et al. Ultrafast and memory-efficient alignment of short DNA sequences to the human genome. Genome Biology 2009, 10:R25.
  2. C. Trapnell, L. Pachter, and S.L. Salzberg. TopHat: discovering splice junctions with RNA-Seq. Bioinformatics 2009 25(9):1105-1111.
  3. C. Trapnell, et al. Transcript assembly and quantification by RNA-Seq reveals unannotated transcripts and isoform switching during cell differentiation. Nature Biotechnology 28, 511-515 (2010).
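
As background on the indexing idea behind Bowtie [1], the toy sketch below builds the Burrows-Wheeler transform of a reference string and counts exact occurrences of a query with backward search. It is a drastic simplification: real aligners use compressed rank structures, suffix-array sampling, and mismatch handling, none of which appear here.

    def bwt(ref):
        ref += "$"                                   # unique end-of-string sentinel
        rotations = sorted(ref[i:] + ref[:i] for i in range(len(ref)))
        return "".join(r[-1] for r in rotations)     # last column of sorted rotations

    def count_occurrences(bwt_str, query):
        # Backward search: maintain the interval of sorted rotations whose
        # prefixes match the (growing) suffix of the query.
        sorted_bwt = sorted(bwt_str)
        C = {c: sorted_bwt.index(c) for c in set(bwt_str)}   # chars smaller than c
        rank = lambda c, i: bwt_str[:i].count(c)             # occurrences of c before i
        lo, hi = 0, len(bwt_str)
        for c in reversed(query):
            if c not in C:
                return 0
            lo, hi = C[c] + rank(c, lo), C[c] + rank(c, hi)
            if lo >= hi:
                return 0
        return hi - lo

    print(count_occurrences(bwt("ACAACG"), "AC"))    # -> 2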

Speaker Biography: Dr. Steven Salzberg is a Professor of Medicine in the McKusick-Nathans Institute of Genetic Medicine at Johns Hopkins University, where he holds joint appointments in the Departments of Biostatistics and Computer Science. From 2005-2011, he was the Director of the Center for Bioinformatics and Computational Biology (CBCB) and the Horvitz Professor of Computer Science at the University of Maryland, College Park. From 1997-2005 he was Senior Director of Bioinformatics at The Institute for Genomic Research (TIGR) in Rockville, Maryland, one of the world’s leading DNA sequencing centers at the time. Dr. Salzberg has authored or co-authored two books and over 200 publications in leading scientific journals, and his h-index is 83. He is a Fellow of the American Association for the Advancement of Science (AAAS) and the Institute for Science in Medicine, and a former member of the Board of Scientific Counselors of the National Center for Biotechnology Information at NIH. He currently serves on the Editorial Boards of the journals Genome Research, Genome Biology, BMC Biology, Journal of Computational Biology, PLoS ONE, BMC Genomics, BMC Bioinformatics, and Biology Direct, and he is a member of the Faculty of 1000. He co-chaired the Third (1999) through the Eighth (2005) Conferences on Computational Genomics, the 2007 and 2009 International Conferences on Microbial Genomics, and the 2009 Workshop on Algorithms in Bioinformatics.

Student Seminar

November 22, 2011

Human annotators are critical for creating the necessary datasets to train statistical learning algorithms. However, there exist several limiting factors to creating large annotated datasets, such as annotation cost and limited access to qualified annotators. In recent years, researchers have investigated overcoming this data bottleneck by resorting to crowdsourcing, which is the delegation of a particular task to a large group of individuals rather than a single person, usually via an online marketplace.

This thesis is concerned with crowdsourcing annotation tasks that aid either the training, tuning, or evaluation of statistical learners, across a variety of tasks in natural language processing. The tasks reflect a spectrum of annotation complexity, from simple class label selection, through selecting textual segments from a document, to composing sentences from scratch. The annotation setups were novel as they involved new types of annotators, new types of tasks, new types of data, and new types of algorithms that can handle such data.

The thesis is divided into two main parts: the first part deals with text classification, and the second part deals with machine translation (MT).

The first part deals with two examples of the text classification task. The first is the identification of dialectal Arabic sentences and distinguishing them from standard Arabic sentences. We utilize crowdsourcing to create a large annotated dataset of Arabic sentences, which is used to train and evaluate language models for each Arabic variety. The second is a sentiment analysis task: distinguishing positive movie reviews from negative ones. We introduce a new type of annotation called rationales, which complement the traditional class labels and aid in learning system parameters that generalize better to unseen data.

In the second part, we examine how crowdsourcing can be beneficial to machine translation. We start with the evaluation of MT systems, and show the potential of crowdsourcing to edit MT output. We also present a new MT evaluation metric, RYPT, that is based on human judgment, and well-suited for a crowdsourced setting. Finally, we demonstrate that crowdsourcing can be helpful in collecting translations to create a parallel dataset. We discuss a set of features that can help distinguish well-formed translations from those that are not, and we show that crowdsourcing translation yields results of near-professional quality at a fraction of the cost.

Throughout the thesis, we will be concerned with how we can ensure that the collected data is of high quality, and we will employ a set of quality control measures for that purpose. Those methods will be helpful not only in detecting spammers and unfaithful annotators, but also in detecting those who are simply unable to perform the task properly, which is a more subtle form of undesired behavior.
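
As a generic example of redundancy-based quality control (illustrative only; the thesis describes its own set of measures), each item can be labeled by several workers, the majority label taken as the aggregate, and workers whose agreement with the majority falls below a threshold flagged for review:

    from collections import Counter, defaultdict

    def aggregate(labels, min_agreement=0.6):
        """labels: list of (worker_id, item_id, label) from redundant annotation."""
        by_item = defaultdict(list)
        for worker, item, label in labels:
            by_item[item].append(label)

        # Majority vote per item.
        majority = {item: Counter(votes).most_common(1)[0][0]
                    for item, votes in by_item.items()}

        # Flag workers who rarely agree with the majority, whether spammers
        # or annotators who simply cannot do the task.
        agree, total = Counter(), Counter()
        for worker, item, label in labels:
            total[worker] += 1
            agree[worker] += (label == majority[item])
        flagged = [w for w in total if agree[w] / total[w] < min_agreement]
        return majority, flagged

    votes = [("w1", "s1", "pos"), ("w2", "s1", "pos"), ("w3", "s1", "neg"),
             ("w1", "s2", "neg"), ("w2", "s2", "neg"), ("w3", "s2", "pos")]
    print(aggregate(votes))    # majority labels plus ["w3"] flagged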

November 22, 2011

The next-generation supercomputer, with a speed of 10 petaflops, is now under development as a national project in Japan. Not only the hardware but also the software development is emphasized, and software development for a human body simulator has been designated a grand challenge program for effective use of the supercomputer. In this program, the multiscale and multi-physics nature of living matter is addressed. Under this concept, we are developing simulation tools at the organ and body scales using a continuum mechanics approach.

Software that uses patient-specific data is highly anticipated for next-generation medical treatment. In the present talk, two kinds of simulators are introduced. First, as a medical application of ultrasound therapy, a HIFU (High-Intensity Focused Ultrasound) simulator is explained in the context of brain tumor treatment. Due to the presence of the skull, controlling the focus of the ultrasound field becomes considerably difficult without information about skull shape and thickness. Here, we utilize CT data for the skull, and a time-reversal method for the wave equations is introduced to control the focal point. The numerical results illustrate that the simulation can be used to design HIFU therapy. Next, a novel numerical method suitable for using medical images is explained. The method is based on a finite difference discretization of the fluid-structure interaction problem in a fully Eulerian description. Although the developed method requires a relatively complicated mathematical treatment, it does not require a mesh-generation procedure, which is a big advantage when the software is introduced in medical institutions. Some examples of numerical simulations are shown with detailed validation of the method. Furthermore, a multiscale thrombosis simulator is explained, along with the current stage of development of its numerical methods. A numerical model simulating the initial stage of thrombus formation is presented. The molecular-scale interaction between platelets and the vascular endothelium is taken into account through stochastic Monte Carlo simulations. The interaction force obtained from the Monte Carlo simulation is then coupled with a continuum-scale blood flow simulation using the above-mentioned FSI method. The results illustrate that platelets aggregate on the wall much more readily in the presence of red blood cells, and the effect of the molecular interaction force on platelet aggregation is quantitatively discussed.
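
To give a flavor of focusing in the simplest possible setting (a homogeneous medium with an assumed soft-tissue sound speed; the CT-based skull-aberration correction described in the talk is omitted), the sketch below computes per-element emission delays so that all wavefronts arrive at the focal point simultaneously:

    import math

    SPEED_OF_SOUND = 1540.0   # m/s, soft-tissue average (assumption)

    def focusing_delays(element_positions, focus):
        """Per-element delays (s) so all wavefronts arrive at `focus` together."""
        travel = [math.dist(p, focus) / SPEED_OF_SOUND for p in element_positions]
        t_max = max(travel)
        return [t_max - t for t in travel]   # farthest element fires first (delay 0)

    # Example: a 5-element linear array focusing 60 mm in front of its center.
    elements = [(x * 5e-3, 0.0, 0.0) for x in range(-2, 3)]
    print(focusing_delays(elements, (0.0, 0.0, 60e-3)))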

Finally, future directions of our research and development are also discussed.

Speaker Biography: 1995: Doctor of Engineering, The University of Tokyo. 1998-2002: Assistant Professor, Dept. of Mechanical Engineering, The University of Tokyo. 2002-2010: Associate Professor, Dept. of Mechanical Engineering, The University of Tokyo. 2007-present (concurrent): Team Leader, Computational Science Research Program, RIKEN. 2010-present: Professor, Dept. of Mechanical Engineering, The University of Tokyo.

December 1, 2011

Following a review of classical mathematical epidemiology, Epstein will present selected applications of agent-based computational modeling to public health, across a range of hazards and scales, including: (1) a playground-level infectious disease model; (2) a county-level smallpox model calibrated to 20th-century European outbreak data and used to design containment strategies; (3) two city-level hybrid models (of New Orleans and Los Angeles) combining high-performance computational fluid dynamics and agent-based modeling to simulate and optimize evacuation dynamics given airborne toxic chemical releases; (4) an analogous hybrid Los Angeles model of agents and earthquakes; (5) a 300-million-agent model of the United States, used to simulate infectious disease dynamics and emergency surge capacity at national scale; and (6) the Global Epidemic Model (GEM), developed for the National Institutes of Health to study pandemic influenza transmission and containment on a planetary scale.
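
For readers unfamiliar with agent-based epidemic modeling, the toy SIR simulation below illustrates the general approach on a fully mixed population; all parameters are invented for illustration, and it is not one of the calibrated models in the talk.

    import random

    def simulate(n=200, initial_infected=2, contacts_per_day=5,
                 p_transmit=0.05, days_infectious=7, days=60, seed=0):
        rng = random.Random(seed)
        state = ["S"] * n                      # each agent is S, I, or R
        clock = [0] * n                        # days of infectiousness remaining
        for i in range(initial_infected):
            state[i], clock[i] = "I", days_infectious
        history = []
        for _ in range(days):
            infected = [i for i in range(n) if state[i] == "I"]
            for i in infected:                 # each infected agent meets a few others
                for j in rng.sample(range(n), contacts_per_day):
                    if state[j] == "S" and rng.random() < p_transmit:
                        state[j], clock[j] = "I", days_infectious
            for i in infected:                 # recover after the infectious period
                clock[i] -= 1
                if clock[i] == 0:
                    state[i] = "R"
            history.append(state.count("I"))   # daily infected count
        return history

    print(simulate()[:10])                     # early growth of the outbreak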

Speaker Biography: Joshua M. Epstein, Ph.D., is Professor of Emergency Medicine at Johns Hopkins University, with Joint Appointments in Applied Mathematics and Statistics, Economics, Environmental Health Sciences, and Biostatistics. He is Director of the JHU Center for Advanced Modeling in the Social, Behavioral, and Health Sciences. He is an External Professor at the Santa Fe Institute, a member of the New York Academy of Sciences, and was recently appointed to the Institute of Medicine’s Committee on Identifying and Prioritizing New Preventive Vaccines. Earlier, Epstein was Senior Fellow in Economic Studies and Director of the Center on Social and Economic Dynamics at the Brookings Institution. He is a pioneer in agent-based computational modeling of biomedical and social dynamics. He has authored or co-authored several books including Growing Artificial Societies: Social Science from the Bottom Up, with Robert Axtell (MIT Press/Brookings Institution); Nonlinear Dynamics, Mathematical Biology, and Social Science (Addison-Wesley), and Generative Social Science: Studies in Agent-Based Computational Modeling (Princeton University Press). Epstein holds a Bachelor of Arts degree from Amherst, a Ph.D. from MIT, and has taught at Princeton, and lectured worldwide. In 2008, he received an NIH Director’s Pioneer Award, and in 2010 an Honorary Doctorate of Science from Amherst College.

Student Seminar

December 8, 2011

Verifying the integrity, authenticity and freshness of remotely stored data requires new, efficient, and scalable solutions. User expectations for ubiquitous and low-latency access to increasingly large amounts of data are forcing an evolution of the data storage and retrieval model. Data are routinely stored at and retrieved from locations that are not controlled by the original data source. New data verification approaches must similarly evolve to offset the risk of accessing data that has been modified in a manner unintended by the originating source.

The work for this talk extends data verification to meet the scalability and efficiency requirements of the evolving outsourced data model. First, the Cloud Authenticated Dictionary (CLAD) handles cloud-scale verification using an authenticated dictionary capable of managing billions of objects. Next, the Authenticated PR-Quadtree (APR-Quad) efficiently processes bulk updates and queries for multidimensional data. Finally, the Multi-producer Authenticated PR-Quadtree (MAPR) publicly authenticates data from multiple producers using a single proof.
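
The authenticated structures above build on Merkle-style hash trees. As general background (this is a generic membership-proof sketch, not the CLAD, APR-Quad, or MAPR constructions themselves), a client holding only a small root digest can verify that an object is in the collection using a logarithmic-size proof:

    import hashlib

    def h(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def build_tree(leaves):
        """Return list of levels, leaf hashes first, root level last."""
        level = [h(x) for x in leaves]
        levels = [level]
        while len(level) > 1:
            if len(level) % 2:                 # duplicate last node if the level is odd
                level = level + [level[-1]]
            level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
            levels.append(level)
        return levels

    def prove(levels, index):
        """Sibling hashes from leaf to root for the leaf at `index`."""
        proof = []
        for level in levels[:-1]:
            if len(level) % 2:
                level = level + [level[-1]]
            proof.append((level[index ^ 1], index % 2))   # (sibling, am_I_right_child)
            index //= 2
        return proof

    def verify(leaf, proof, root):
        digest = h(leaf)
        for sibling, is_right in proof:
            digest = h(sibling + digest) if is_right else h(digest + sibling)
        return digest == root

    items = [b"obj-%d" % i for i in range(5)]
    levels = build_tree(items)
    root = levels[-1][0]                       # the only state a verifier must trust
    print(verify(items[3], prove(levels, 3), root))       # -> True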