

FOME’11: Future of Middleware at Middleware 2011

13 December 2011, Lisbon, Portugal

Description

We are witnessing a period of rapid and intense change in distributed systems, at a rate unprecedented since the inception of the subject in the early 1980s. With the advent of cloud computing, for example, we see the deployment of very large scale distributed systems offering a range of novel services, including new paradigms for large-scale computation. Distributed systems are also becoming significantly more heterogeneous, spanning everything from very small devices embedded in the physical environment around us to data centres housing massive cluster computers. In addition, users of distributed systems are often on the move, resulting in significant context changes over time to which the system must adapt. Networking technologies also continue to evolve, with, for example, the emergence of a range of new ad hoc networking techniques and peer-to-peer approaches to implementing core network services.

Middleware retains a core role in distributed systems, offering key properties such as abstraction, interoperability and openness, and supporting a range of non-functional properties. However, middleware is coming under increasing pressure given the trends highlighted above:

  • What are the right abstractions for the development of future distributed systems, given the scale and complexity of the underlying infrastructure? How can we abstract over this complexity? What do we need in terms of middleware APIs, programming languages and associated software engineering methodologies?
  • How do we achieve interoperability and openness given the extreme heterogeneity of today’s distributed systems? What principles and approaches do we need to deal with such extreme heterogeneity, and do existing approaches to interoperability and openness still work in this new world?
  • How do we achieve the desired level of dependability and security? Again, what principles and techniques do we need to achieve such non-functional properties, and how do we embed these techniques within the architectures of our middleware platforms?

These are difficult questions to answer, and it is clear that the middleware community faces major challenges over the next few years in keeping up with the pace of change. It is therefore timely to announce the inaugural Future of Middleware event (FOME’11), held in conjunction with the ACM/IFIP/USENIX Middleware conference, the premier conference on middleware principles, architecture and engineering.

FOME’11 brings together invited leading researchers in the field, selected to offer comprehensive coverage of the key issues identified above. The aims of the event are to take stock of where we are in middleware, to address the key challenges facing the field, to establish an agenda for the next wave of middleware research and, most importantly, to stimulate researchers and build a community to address the significant challenges we face. In a nutshell: what should middleware look like in 2020?

We are excited by the event and by the speakers and topics that will feature in FOME’11. We look forward to meeting you in Lisbon.

The event is partly supported by the EC FET CONNECT project (http://connect-forever.eu/). 

Technical Program

All papers are available for download from SpringerLink.

Challenges of Developing Highly Complex Distributed Systems

  • R&D Challenges and Solutions for Highly Complex Distributed Systems: a Middleware Perspective
    Douglas C. Schmidt, Vanderbilt University, USA
    Joint work with Richard Schantz (BBN Technologies), Brian Dougherty and Jules White (Virginia Tech), Angelo Corsaro (PrismTechnologies) and Adam Porter (University of Maryland)

Highly complex distributed systems are characterized by a large number of mission-critical, heterogeneous, inter-dependent subsystems executing concurrently with diverse, often conflicting, QoS requirements. Middleware is increasingly used as the computing and communication foundation for these highly complex distributed systems in domains such as air traffic control and management, electrical power grid systems, large-scale supervisory control and data acquisition (SCADA), telecommunications, and integrated health care delivery. Researchers and developers of middleware for the current and next generation of highly complex distributed systems must address the following challenges: (1) encapsulating heterogeneity at scale, (2) supporting diverse QoS requirements, (3) deriving valid, high-performance configurations of highly configurable infrastructure platforms, (4) supporting dynamic configuration and reconfiguration, and (5) ensuring robust, extensible and adaptive programming and communication models.

  • Developing Highly Complex Distributed Systems: A Software Engineering Perspective
    Paola Inverardi, University of L’Aquila, Italy
    Joint work with Marco Autili, Massimo Tivoli and Patrizio Pelliccione (University of L’Aquila)

What is a highly complex distributed system in the future? What needs may drive the development of such systems, and what is their life cycle? Are there new challenges for software engineering? In this paper we try to provide a partial answer to these questions by characterizing a few application domains that we consider of rising interest in the coming years. Our thesis is that the whole software process for such systems needs to be rethought. The traditional boundaries between static and dynamic activities disappear, and development support mingles with run-time support, thus invading the middleware territory.

  • Programming Language Impact on the Development of Distributed Systems
    Steve Vinoski, Basho Technologies, USA
    Joint work with Debasish Ghosh (Anshin Software Private Ltd.), Justin Sheehy (Basho Technologies) and Kresten Krab Thorup (Trifork)

Programming languages have long influenced the development of distributed systems. The past few decades have seen a continuing series of oscillations between distributed system homogeneity and heterogeneity from a programming language point of view. While much middleware and distributed systems code continues to be developed in mainstream languages such as Java and C++, several forces have recently combined to drive renewed interest in other programming languages. The result has been increased use of languages such as Erlang, Scala, Haskell and Clojure, which allow programming at a higher level of abstraction, resulting in better modularity, enhanced speed of development, and added power to reason about the systems being developed. Such languages can also be used effectively to develop embedded domain-specific languages that expressively and succinctly model issues inherent in distributed systems, including concurrency, parallelism, and fault tolerance. In this paper, we first present a brief history of programming languages and distributed systems, and then explore several alternative languages along with modern systems built using them. We focus on language and application features, how problems of distribution are addressed, concurrency issues, code brevity, extensibility, and maintenance concerns. Finally, we speculate about the possible influences today’s alternative programming languages could have on the future of middleware and distributed systems development.
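
To make the point about higher-level abstraction concrete, here is a minimal sketch in Scala (one of the languages named above) in which two independent remote calls are expressed and composed as Futures rather than with hand-written threads and locks. The service names and stubbed bodies are invented for illustration, not taken from the paper.

    import scala.concurrent.{Await, Future}
    import scala.concurrent.ExecutionContext.Implicits.global
    import scala.concurrent.duration._

    object ComposedCalls {
      // Hypothetical remote services, stubbed locally for this sketch.
      def fetchProfile(id: Int): Future[String] = Future { s"profile-$id" }
      def fetchOrders(id: Int): Future[List[Int]] = Future { List(101, 102) }

      def main(args: Array[String]): Unit = {
        // Create both Futures before composing so the calls run concurrently.
        val profile = fetchProfile(42)
        val orders  = fetchOrders(42)
        // Declarative composition; a failure in either call propagates to `page`.
        val page = for (p <- profile; o <- orders) yield s"$p, ${o.size} orders"
        println(Await.result(page, 5.seconds))
      }
    }

Because the two Futures are created before they are composed, the calls proceed concurrently, and a failure in either propagates as a failed Future rather than an unhandled thread error.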

Highly Heterogeneous and Dynamic Distributed Systems Challenges

  • Middleware for Wireless Sensor Networks: An Outlook
    Gian Pietro Picco, University of Trento, Italy
    Joint work with Luca Mottola (SICS)

In modern distributed computing, applications are rarely built directly atop operating system facilities, e.g., sockets. Higher-level middleware abstractions are often employed to simplify the programmer’s chore and to achieve interoperability. In contrast, real-world wireless sensor network (WSN) applications are almost always developed by relying directly on the operating system. Why is this the case? Is it a problem? Does it even make sense to have a middleware for WSNs? And, if so, is it the same kind of software system as in modern distributed computing? What are the fundamental concepts, reasonable assumptions, and key criteria guiding its design? What are the main open research challenges, and the potential pitfalls? Most importantly, is it worth pursuing research in this field? This paper provides a (biased) answer to these and other research questions, preceded by a brief account of the state of the art in the field.

  • Dependability and Resilience in CyberPhysical Systems: A Middleware Perspective
    Nalini Venkatasubramanian, University of California Irvine, USA
    Joint work with Grit Denker, Nikil Dutt, Sharad Mehrotra and Carolyn Talcott

We address the role of middleware in enabling robust and resilient cyberphysical environments of the future. In particular, we focus on how adaptation services can be used to improve dependability in instrumented cyberphysical spaces, based on the principles of “computational reflection”. CPS environments incorporate a variety of sensing and actuation devices in a distributed architecture; such a deployment is used to create a digital representation of the evolving physical world and its processes, for use by applications such as critical infrastructure monitoring, surveillance and incident-site emergency response. CPS applications, in particular mission-critical tasks, must execute dependably despite disruptions caused by failures and limitations in sensing, communications, and computation.

We also discuss a range of applications (consumer and mission-critical), their reliability needs, and potential dependability holes that can cause performance degradation and application failures. In particular, we distinguish between the notions of infrastructure and information dependability, and illustrate how formal-methods-based approaches can be used to model, represent and reason about a range of CPS applications and resilience needs. We discuss semantic foundations that guide the development of specific adaptation techniques at different layers of the CPS environment (networking, sensing, applications, cross-layer) to achieve end-to-end dependability at both the infrastructure and information levels. Examples of techniques discussed include mechanisms for reliable information delivery over multi-networks, quality-aware data collection, semantic sensing, and reconfiguration using the overlapping capabilities of heterogeneous sensors. The talk ends with experiences from real-world testbeds and pilot deployments that indicate the utility of the “reflective” approach and of cross-layer adaptation techniques in achieving dependability.
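
As a loose illustration of the reflective observe-reason-adapt pattern described above (a sketch, not the authors’ system), the fragment below shows a middleware that exposes a model of its own state and an adaptation service that acts on it; all names, thresholds and policies are invented.

    object ReflectiveLoop {
      // "Reflection": the middleware exposes a model of its own operation.
      final case class SystemState(lossRate: Double, activeSensors: Int)
      def observe(): SystemState = SystemState(lossRate = 0.12, activeSensors = 8)

      // Adaptation service: decide a corrective action from the observed model.
      def adapt(s: SystemState): String =
        if (s.lossRate > 0.10) "enable redundant delivery over a second network"
        else if (s.activeSensors < 5) "recruit overlapping sensors for coverage"
        else "no change"

      def main(args: Array[String]): Unit =
        println(adapt(observe()))
    }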

  • Applying Evolutionary Computation to Mitigate Uncertainty in Dynamically-Adaptive, High-Assurance Middleware
    Phil McKinley, Michigan State University, USA
    Joint work with Betty H.C. Cheng, Andres J. Ramirez and Adam C. Jensen (Michigan State University)

A robust and resilient software system must be able to monitor its environment, adapt to changing conditions, and protect itself from component failures and attacks. However, the designer of such an adaptive system is faced with a challenging set of tasks: anticipating how and when the system will need to adapt in the future, codifying this behavior in decision-making components to govern the adaptation, and ensuring system integrity during adaptation. These tasks are particularly difficult for systems that must operate safely in the face of continuous dynamics and environmental uncertainty. In the biological world, evolution has done a remarkable job of creating systems that easily adapt to their environment and survive highly adverse conditions. In this paper, we explore the integration of evolutionary computation into the development and run-time support of dynamically-adaptable, high-assurance middleware. Open-ended evolutionary processes have been shown to discover novel solutions to complex engineering problems. In the case of high-assurance adaptive software, this search capability must be coupled with rigorous development tools and run-time support to ensure that the resulting systems behave in accordance with requirements. Results of early investigations are reviewed, and several challenging problems and possible research directions are discussed.
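
As a toy illustration of the kind of evolutionary search involved (a sketch under invented assumptions, not the authors’ method), the fragment below evolves a bit-string “configuration” toward a stand-in requirements target with a simple (1+1) strategy. In a high-assurance setting this search would be coupled with verification, as the paper argues.

    import scala.util.Random

    object EvoSketch {
      val rng = new Random(42)
      // Stand-in "requirements": a hidden target the search must satisfy.
      val target: Vector[Boolean] = Vector.fill(16)(rng.nextBoolean())

      // Fitness: how many requirement bits a candidate configuration satisfies.
      def fitness(c: Vector[Boolean]): Int =
        c.zip(target).count { case (a, b) => a == b }

      // Mutation: flip each bit with small probability to propose a variant.
      def mutate(c: Vector[Boolean]): Vector[Boolean] =
        c.map(bit => if (rng.nextDouble() < 1.0 / c.size) !bit else bit)

      def main(args: Array[String]): Unit = {
        var best = Vector.fill(16)(rng.nextBoolean())
        for (_ <- 1 to 500) {
          val child = mutate(best)
          if (fitness(child) >= fitness(best)) best = child // keep the better plan
        }
        println(s"best fitness: ${fitness(best)} / 16")
      }
    }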

Very Large Scale Distributed Systems Challenges

  • Challenges in Very Large Distributed Systems
    Maarten van Steen, VU University Amsterdam, Dept. of Informatics, The Netherlands
    Joint work with Guillaume Pierre and Spyros Voulgaris (VU University Amsterdam)

Many modern distributed systems are required to scale in terms of their support for processes, resources, and users. Moreover, a system is often also required to operate across the Internet and across different administrative domains. These scalability requirements lead to a number of well-known challenges in which distribution transparency must be traded off against loss of performance. In this paper, in addition to briefly discussing these traditional challenges, we concentrate on three major ones for which we claim there is no easy solution. These challenges originate from the fact that users and systems are becoming increasingly integrated, effectively leading us to large-scale socio-technical distributed systems. We identify the design of such integrated systems as one challenge. Moreover, as users are so tightly integrated into the overall design, new issues with respect to long-term management emerge as well, which we identify as a second major challenge. Finally, in order to provide what may be coined an invisible interface between users and the core system, we face a third challenge in the difficult trade-offs between users providing information and preserving their privacy.

  • Cloud Management
    Dejan Milojicic, Hewlett Packard Laboratories, USA
    Joint work with Nigel Cook and Vanish Talwar (Hewlett Packard Laboratories)

Cloud computing offers a number of benefits, such as elasticity with the perception of unlimited resources, self-service, on-demand provisioning, and automation. However, these benefits create new requirements for the management of cloud computing. On the back end, economic limitations dictate careful consolidation of servers with clear sustainability analysis; the levels of abstraction being managed are higher (from hardware, to VMs, to services); and reliability, availability and supportability are built into higher levels of systems and services. On the client side, cloud services have to be easy to use and manage, perform well, and be reliable. On both sides, geographical distribution and its implications for business continuity are the rule rather than the exception; scalability is built in by design; and QoS is still being defined. In this paper, we discuss new requirements and approaches to cloud management. We then present a few use cases for existing hardware, software, and service platforms. Based on these, we derive qualitative and quantitative conclusions about the manageability of current platforms and make predictions about the future of cloud management. We expect these findings to help designers of next-generation hardware and software platforms develop more manageable systems and solutions.

  • On MapReduce Scheduling in Cloud Computing Environments
    Raouf Boutaba, University of Waterloo, Canada
    Joint work with Lu Cheng and Qi Zhang (University of Waterloo)

Cloud programming models such as MapReduce and Dryad have become dominant for large-scale computing in recent years. They allow jobs to complete much faster and job scheduling to be more robust to the run-time exceptions common in large clusters. Several studies have analyzed production cloud computing environments, and one important finding is that clusters and workloads are both heterogeneous. A cluster often consists of multiple generations of hardware. In addition, a virtualized data center, such as Amazon EC2, typically contains virtual resources with significant variations in performance. To maximize resource utilization, clusters are shared by multiple jobs of different types and with different priorities. In particular, MapReduce jobs can span several orders of magnitude in job length and size. A common problem caused by such heterogeneity is straggler tasks, which can significantly delay job completion. Another notable challenge is fairness, since a large job can monopolize the entire cluster and as a result starve others. To coordinate resource sharing, preemption is often used, but it incurs resource wastage.

This paper first provides an introduction to MapReduce and Hadoop, a popular open-source implementation of MapReduce. Second, it discusses MapReduce scheduling, and more specifically how job schedulers handle such heterogeneous environments, highlighting the research challenges. Third, it discusses techniques for mitigating the negative impact of preemption on heterogeneous MapReduce workloads.
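
For readers new to the model, here is a minimal word-count in plain Scala collections mirroring the map, shuffle and reduce phases the paper introduces; in Hadoop the same shape is distributed across a cluster, which is where the heterogeneity, straggler and fairness issues above arise. The input data is invented.

    object WordCount {
      def main(args: Array[String]): Unit = {
        val docs = List("the quick fox", "the lazy dog", "the fox")
        val counts = docs
          .flatMap(_.split("\\s+"))             // map: emit one record per word
          .groupBy(identity)                    // shuffle: group records by key
          .map { case (w, ws) => (w, ws.size) } // reduce: aggregate per key
        counts.toList.sortBy(-_._2).foreach(println)
      }
    }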

Dependability & Security Challenges

  • Assuring Computation, a Middleware Defense?
    Roy Campbell, University of Illinois at Urbana-Champaign, USA

Rapid technological advance, global networking, commercial off-the-shelf technology, security, agility, scalability, reliability, and mobility create a target of opportunity for reducing the costs of computation. But mission-critical cloud computing across hybrid (public, private, heterogeneous) clouds requires the realization of “end-to-end” and “cross-layered” security, dependability, and timeliness. That is, computations and computing systems should survive malicious attacks and accidental failures; they should be secure; and they should execute in a timely manner. End-to-end implies that the properties should hold throughout the lifetime of individual events, e.g., a packet transit or a session between two machines, and that they should be assured in a manner that is independent of the environment through which such events pass. Similarly, cross-layer encompasses multiple layers from the end device through the network and up to the applications or computations at the data center. A survivable and distributed cloud-computing-based infrastructure requires the configuration and management of dynamic systems-of-systems with both trusted and partially trusted resources (data, sensors, networks, computers, etc.) and services sourced from multiple organizations. To assure mission-critical computations and workflows that rely on such dynamically configured systems-of-systems, we must ensure that a given configuration does not violate any security or reliability requirements. Furthermore, we should be able to model the trustworthiness of a workflow or computation’s completion for a given configuration in order to specify the right configuration for high assurance.

This paper discusses the architecture and design of middleware platforms to support assured cloud computing. It proposes approaches to analyze, reason about, prototype and evaluate architectures, designs and performance of secure, timely, fault-tolerant, mission-oriented cloud computing. The paper discusses research into: 1) novel security primitives, protocols, and mechanisms to secure and support assured computations, 2) algorithms and techniques to enhance end-to-end timeliness of computations, 3) algorithms that detect security policy or reliability requirement violations in a given configuration, 4) algorithms that dynamically configure resources for a given workflow based on security policy and reliability requirements, and 5) algorithms, models, and tools to estimate the probability of completion of a workflow for a given configuration.
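
As a toy illustration of item (3), the sketch below checks a candidate configuration against security and reliability requirements; the fields, thresholds and names are invented for illustration and are not the paper’s algorithms.

    object ConfigCheck {
      // Hypothetical resource model for a candidate system-of-systems configuration.
      final case class Resource(name: String, trusted: Boolean, availability: Double)

      // Return every security or reliability violation in the configuration.
      def violations(config: List[Resource]): List[String] =
        config.flatMap { r =>
          List(
            if (!r.trusted) Some(s"${r.name}: untrusted resource in mission-critical path") else None,
            if (r.availability < 0.99) Some(s"${r.name}: availability below required 0.99") else None
          ).flatten
        }

      def main(args: Array[String]): Unit = {
        val cfg = List(
          Resource("storage-a", trusted = true, availability = 0.999),
          Resource("compute-b", trusted = false, availability = 0.95))
        violations(cfg).foreach(println) // a valid configuration prints nothing
      }
    }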

  • Another Look at the Middleware for Dependable Distributed Computing
    Mark Little, Red Hat, UK
    Joint work with Santosh Shrivastava (Newcastle University) and Stuart Wheater (Arjuna Technologies)

The paper starts by examining the role played by middleware in supporting dependable enterprise applications, from CORBA to the present. Key concepts of reliable distributed computing developed during the eighties and nineties (e.g., transactions, replication) influenced standards-based middleware such as Java EE. However, the way enterprises use networked computing for business is changing rapidly, as new ways emerge of constructing distributed execution environments from a variety of resources, ranging from computational, storage and network resources to application-level services, provided by globally distributed service providers. There will be strong emphasis on customisation of the dependability support within the middleware to address the individual needs of end-users. In light of these developments, the paper goes on to examine the core concepts, components and techniques that will be required in the middleware of the future.

  • Towards Application-driven Security Dashboards in Future Middleware
    Wouter Joosen, KUL, Belgium
    Joint work with Bert Lagaisse (KUL)

Trustworthiness has become a key requirement for new distributed applications deployed over the Internet. Trustworthiness demands performance and availability, but also various security-related properties such as confidentiality, access control, non-repudiation, and so forth. Ideally, these security properties are delivered out of the box, i.e., as built-in services in middleware that facilitates application development and deployment. The application-specific trade-offs between performance, availability and security, however, do not admit a one-size-fits-all solution: they emerge from trade-off analysis during the requirements engineering and architecture creation phases of the application.

We argue that contemporary middleware must facilitate the customization of the built-in services framework, such that the non-functional requirements emerging from the software and application engineering process are met. This must be achieved by facilitating the adaptation and selection of appropriate services without carrying the load, footprint and overhead of a bloated middleware system. We illustrate the concept and approach with an example from the security engineering of a large-scale, Internet-based application for online document processing. We present an approach and a proof of concept of the above-sketched view by developing application-specific security services that plug into basic middleware. First results show that such custom security middleware can be generated cost-effectively (i.e., semi-automatically) and on demand, in principle enabling traceability from requirements and architectural artifacts to the services deployed in the middleware. In addition, we show that such an approach can yield not only the desired variants of middleware security services, but also the tools to monitor and manage the actual security environment.
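
As a minimal sketch of the pluggable, application-selected security services described above (hypothetical interfaces, not the authors’ framework), the fragment below exposes an interceptor hook and composes only the services a given deployment’s requirements call for, avoiding a bloated one-size-fits-all stack.

    object PluggableSecurity {
      final case class Request(user: String, payload: String)
      // A security service is an interceptor: pass the request on, or reject it.
      type Interceptor = Request => Either[String, Request]

      val accessControl: Interceptor =
        r => if (r.user == "alice") Right(r) else Left(s"denied: ${r.user}")
      val auditLog: Interceptor =
        r => { println(s"audit: ${r.user}"); Right(r) }

      // Compose only the services this application's requirements call for.
      def pipeline(stages: List[Interceptor]): Interceptor =
        r => stages.foldLeft(Right(r): Either[String, Request])((acc, s) => acc.flatMap(s))

      def main(args: Array[String]): Unit = {
        val chain = pipeline(List(auditLog, accessControl))
        println(chain(Request("alice", "doc1")))   // Right(...): request admitted
        println(chain(Request("mallory", "doc2"))) // Left(...): request rejected
      }
    }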

Future Usage of Distributed Systems

  • Middleware for Social Computing: A Roadmap
    Licia Capra, UCL, UK
    Joint work with Daniele Quercia (University of Cambridge)

Social computing broadly refers to supporting social behaviours by means of computational systems. In the last decade, the advent of Web 2.0, and its social networking services, wikis, blogs, social bookmarking, and the like, has revolutionised social computing, creating new online contexts within which people interact socially. With the pervasiveness of mobile devices and embedded sensors, we stand at the brink of another major revolution, where the boundary between online and offline social behaviours blurs, providing opportunities for (re)defining social conventions and contexts once again. But opportunities come with challenges: can middleware foster the engineering of social software? In this paper, we provide a framework within which we review almost a decade of research in the area of social computing, identify research challenges in engineering social software, and elaborate on how the middleware community can help address them.

  • Data Interoperability
    Massimo Paolucci, DOCOMO Eurolabs, Germany

Data interoperability is one of the main problems of system interoperability. Indeed, the cost of data interoperability is estimated to run to billions of dollars every year. The traditional approach to data interoperability is to define mappings between different data structures and different data formats. While this is surely a very important part of the problem, it is not the only aspect. Often systems need to aggregate data coming from different systems, and to reason and derive conclusions from this data. In this paper, we review the efforts made in the semantic web to address this problem and highlight trends and pitfalls.
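
As a toy example of the schema mappings mentioned above (field names and sources are invented for illustration), two systems expose the same information in different structures, and mapping functions lift both into a shared model that an application can aggregate and reason over:

    object DataMapping {
      // Two invented source schemas carrying the same information differently.
      final case class CrmContact(fullName: String, mail: String)
      final case class LdapEntry(cn: String, sn: String, email: String)
      // A shared model the application can aggregate and reason over.
      final case class Person(name: String, email: String)

      def fromCrm(c: CrmContact): Person = Person(c.fullName, c.mail)
      def fromLdap(e: LdapEntry): Person = Person(s"${e.cn} ${e.sn}", e.email)

      def main(args: Array[String]): Unit = {
        val merged = List(
          fromCrm(CrmContact("Ada Lovelace", "ada@example.org")),
          fromLdap(LdapEntry("Alan", "Turing", "alan@example.org")))
        merged.foreach(println)
      }
    }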

  • Mission-oriented Middleware for Sensor-driven Scientific Systems
    Simon Dobson, University of St Andrews, UK
    Joint work with Alan Dearle (University of St Andrews)

Modern scientific experiments are making more use of intelligent front-end collection and analysis, typically using sensors and sensor networks. Co-ordinating such networks presents a significant challenge: they are typically power- and resource-constrained, operate in noisy and hostile environments, and need to adapt their behaviour to match their operation to the sensed environment while maintaining the scientific integrity of their observations. In this paper we explore several aspects of managing and co-ordinating sensor-driven systems, taking on issues that straddle the middleware, programming language and formal methods domains. We focus in particular on component-based, channel-oriented systems deployed on long-term scientific missions under autonomic control, and study how we can improve the levels of assurance possible for the adaptive behaviour of these systems. We draw broader conclusions about the use of middleware and the integration of mission descriptions and data provenance into the overall scientific workflow.
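
As a loose illustration of the component-and-channel style mentioned above (a sketch with invented names, not the authors’ system), a sensor component pushes readings into a typed channel and an analysis component consumes them, so the coordination logic stays outside the components themselves:

    import java.util.concurrent.LinkedBlockingQueue

    object ChannelSketch {
      // A typed channel connecting two components.
      val readings = new LinkedBlockingQueue[Double]()

      def main(args: Array[String]): Unit = {
        // Sensor component: produces readings into the channel.
        val sensor = new Thread(() => (1 to 5).foreach(i => readings.put(i * 0.5)))
        // Analysis component: consumes readings from the channel.
        val analysis = new Thread(() =>
          (1 to 5).foreach(_ => println(s"reading: ${readings.take()}")))
        sensor.start(); analysis.start()
        sensor.join(); analysis.join()
      }
    }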

Programme

  • 09:00-09:15: Welcome (Gordon Blair & Valérie Issarny)
  • 09:15-10:30: Challenges of developing highly complex distributed systems
    3 presentations of 20’ each
    Discussion
  • 10:30-11:00: Break
  • 11:00-12:15: Highly heterogeneous and dynamic distributed systems challenges
    3 presentations of 20’ each
    Discussion
  • 12:15-13:30: Lunch Break
  • 13:30-14:45: Very large scale distributed systems challenges
    3 presentations of 20’ each
    Discussion
  • 14:45-16:00: Dependability & security challenges
    3 presentations of 20’ each
    Discussion
  • 16:00-16:30: Break
  • 16:30-17:45: Future usage of distributed systems
    3 presentations of 20’ each
    Discussion
  • 17:45-18:00: Closing

Program co-chairs

  • Gordon Blair (Lancaster University)
  • Valérie Issarny (INRIA)