
Accepted Tutorials

Azure Machine Learning for Research, Kenji Takeda, Microsoft

Microsoft Azure Machine Learning<http://azure.microsoft.com/en-us/services/machine-learning/> is a new platform that makes it easy to use machine learning technology for your research and applications. In contrast to traditional tools, Azure Machine Learning can be mastered without extensive technical training and it facilitates collaboration with colleagues. It provides:

  • The capability to visually compose machine learning experiments;
  • Access to proven algorithms from Microsoft Research, Bing, and Xbox;
  • First-class support for R, enabling you to seamlessly bring in existing work;
  • Unmatched ease of collaboration: simply click “share my workspace” to share experiments with anyone, anywhere;
  • Tools to immediately deploy a predictive model as a machine learning web service in the cloud.

The aim of this workshop is to provide you with:

  • An understanding of Azure Machine Learning and how it can be used in your research;
  • Hands-on experience designing, building, training, evaluating and deploying predictive models;
  • An opportunity to discuss your current and future needs with Microsoft and Microsoft Research.

Attendees will be able to access Microsoft Azure Machine Learning on their own laptop during the training and for evaluation purposes for up to three months after the event. The attendee’s laptop does not need to have the Windows operating system installed—Microsoft Azure is accessed via your Internet browser. Microsoft is offering Azure Machine Learning Awards to researchers and students, from all disciplines, who focus on solving real problems for the benefit of society and on teaching data science courses. You can find out more at the workshop, and apply online here<http://research.microsoft.com/en-us/projects/azure/ml.aspx>.
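To give a flavour of the deployment step mentioned above, the sketch below builds the kind of JSON request body that an Azure Machine Learning request/response scoring endpoint expected at the time of this tutorial. The endpoint URL, API key and column names are placeholders, not values from this workshop, and field names may differ in later API versions.

```python
import json

def build_request(column_names, rows):
    """Build a JSON body in the Inputs/GlobalParameters shape used by
    Azure ML request/response scoring endpoints (values sent as strings)."""
    return {
        "Inputs": {
            "input1": {
                "ColumnNames": column_names,
                "Values": [[str(v) for v in row] for row in rows],
            }
        },
        "GlobalParameters": {},
    }

# Placeholders: copy the real values from your workspace's web service page.
ENDPOINT = "https://<region>.services.azureml.net/workspaces/<ws>/services/<svc>/execute?api-version=2.0"
API_KEY = "<your-api-key>"

body = build_request(["sepal_length", "sepal_width"], [[5.1, 3.5]])
headers = {"Content-Type": "application/json",
           "Authorization": "Bearer " + API_KEY}

# The actual call (requires a live endpoint) would then be roughly:
#   import urllib.request
#   req = urllib.request.Request(ENDPOINT, json.dumps(body).encode(), headers)
#   print(urllib.request.urlopen(req).read().decode())
print(json.dumps(body, indent=2))
```

Because the service is plain HTTPS plus JSON, the same call works from R, Python or any other environment with an HTTP client, which is what makes the browser-based workflow language-neutral.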

Model-Driven Management of Multi-Cloud Applications, Danilo Ardagna

Multi-cloud deployment is an emerging paradigm in cloud computing. Deploying applications concurrently on multiple clouds enables them to be more resilient to failures and quality-of-service degradations that may happen in a given cloud. It also alleviates the cloud vendor lock-in problem. However, deploying on multi-clouds comes at the expense of a more complex technology stack for cloud management. In this tutorial, we will introduce a deployment and run-time management platform developed as part of the EU project MODAClouds (http://www.modaclouds.eu/), a large-scale European FP7 project focused on model-driven design and runtime management of multi-cloud applications.

The proponents have experience in presenting tutorials at conferences such as ACM SIGMETRICS, IEEE/ACM ASE, ACM/SPEC ICPE and the MICAS workshop. The material to be presented in this tutorial is largely original and focuses on the runtime management of multi-cloud applications developed with the MODACloudML modelling language. The authors plan to give a tutorial at IEEE/ACM ASE about designing and developing multi-cloud applications; instead, the UCC tutorial proposal focuses on deployment and runtime management of these applications, thus the overlap with the ASE tutorial would be minimal.

The Intercloud Architecture and Project, David R. Bernstein

The IEEE has two active projects in the area of cloud-to-cloud interoperability (“Intercloud”). One is a Standards Association activity, the P2302 Standard for Intercloud Interoperability and Federation (SIIF); the other is an Industry Connections activity, the IEEE Global Intercloud Testbed Project. The Standards working group has approximately 100 participants, and the Testbed project has approximately two dozen member companies and subject matter experts. The goal of the Testbed is to code and set up an open, working “Intercloud” as specified by the Standards working group.

The Intercloud system is an ambitious architecture intended to bring a transparent, globally federated architecture to the world, in which clouds of any type can interoperate with any source of resource. The system uses a semantic resource directory technique and a common channel signaling network to implement the federation. This is the same approach used by the global phone system (SS7/IN) and the global Internet (AS/IP routing).

This tutorial will detail the architecture of the developing Intercloud system, including both the topological elements of Roots, Exchanges, and Gateways and componentry such as the signaling network, trust, naming, security, the semantic directory and solver, and SDN-based federation mechanisms. The presenter has given this type of “master design session” with both the working group and the Testbed teams, and it is a great way to get up to speed quickly on this complex and advanced system.

Distributed Data Storage: From Dispersed Files to Stealth Databases, Josef Spillner, Johannes Müller

Elastic cloud storage services have revitalised network storage techniques. The convenience of using them is attractive to many users. What holds back adoption are concerns about security, privacy, reliability and other quality factors. These risks have been converted into challenges by researchers who investigate flexible user-controlled storage systems such as file storage integrators and databases. The tutorial introduces research results and prototypical tools for next generation distributed cloud storage and processing applications.
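To make the idea of dispersed files concrete, here is a deliberately minimal sketch of information dispersal with a single XOR parity chunk: data is split across k providers plus one parity provider, and any one lost chunk can be rebuilt from the rest. This is an illustration of the principle only, not one of the tutorial's tools.

```python
def disperse(data: bytes, k: int):
    """Split data into k equal chunks plus one XOR parity chunk,
    so the loss of any single chunk is survivable."""
    if len(data) % k:
        data += b"\x00" * (k - len(data) % k)  # pad to a multiple of k
    size = len(data) // k
    chunks = [data[i * size:(i + 1) * size] for i in range(k)]
    parity = bytearray(size)
    for chunk in chunks:
        for i, b in enumerate(chunk):
            parity[i] ^= b
    return chunks + [bytes(parity)]

def recover(chunks, lost):
    """Rebuild the chunk at index `lost` by XOR-ing all the others."""
    size = len(chunks[(lost + 1) % len(chunks)])
    out = bytearray(size)
    for j, chunk in enumerate(chunks):
        if j != lost:
            for i, b in enumerate(chunk):
                out[i] ^= b
    return bytes(out)

pieces = disperse(b"confidential survey results", 4)  # 4 data + 1 parity
assert recover(pieces, 2) == pieces[2]  # a lost chunk is reconstructable
```

Real dispersal tools use stronger erasure codes (e.g. Reed-Solomon) to survive multiple simultaneous failures, and combine dispersal with encryption; the underlying trade of redundancy for provider-failure tolerance is the same.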

Data Analysis with R, Shruti Kohli

Coming soon…

Microsoft Azure for Research, Kenji Takeda

Microsoft Azure<http://azure.microsoft.com/en-us/> is a general, open, and flexible global cloud platform supporting any language, tool, or framework – including Linux, Java, Python, and other non-Microsoft technologies. It is ideally suited to researchers’ needs across disciplines. The tutorial is intended specifically for active scientists who can code, who will soon code, or who are interested in coding in a modern computing context. Attendees will be able to access Microsoft Azure on their own laptop during the training and for evaluation purposes for up to three months after the event. The attendee’s laptop does not need to have the Windows operating system installed—Microsoft Azure is accessed via your Internet browser. This workshop will allow you to:

  • Gain an understanding of cloud computing and why and when you would use it in scientific or other research;
  • Acquire hands-on experience in the major design patterns for successful cloud applications, including virtual machines, web sites, cloud storage, big data, streaming data, and visualisation;
  • Develop the skills to run your own application/services on Microsoft Azure.

Attendees will learn more about applying for our Azure Awards, which offer up to 200,000 compute hours and 20 TB of storage for research projects. See http://www.azure4research.com for more details.

Autonomic Clouds, Omer F. Rana and Manish Parashar

Cloud computing continues to increase in complexity due to a number of factors: (i) the increasing availability of configuration options from public Cloud providers (Amazon, for instance, offers over 4,000 different configuration options); and (ii) the increasing variability and types of application instances that can be deployed over such platforms, ranging from tuning options in hypervisors that associate virtual machine instances with particular physical machines, through storage, compute and I/O preferences that trade off power and price, to operating system configurations that provide differing degrees of security. This complexity can also be seen in the enterprise-scale datacenters that dominate computing infrastructures in industry, which are growing in size and complexity and are enabling new classes and scales of complex business applications.

Autonomic computing offers self-* capabilities that enable the self-management of systems. Although proposed as a vision by IBM Research, the concepts behind autonomic systems are much older. They can be applied to each component within a Cloud system (resource manager/scheduler, power manager, etc.), or within an application that makes use of such a Cloud system. Understanding where such capability can be used most effectively is a decision that is often hard to make, and one that is explored in this tutorial.

This tutorial is divided into three parts and is intended to be accessible to audiences at all levels. The first part is introductory and gives an overview of autonomic computing and why it could be useful in managing and using Cloud systems. The second part discusses a number of techniques from autonomic self-management that could be used for Cloud systems management. The third part provides implementation details about how autonomic systems could be built using CometCloud, and also discusses specific use cases.

Part I identifies how autonomic techniques could be used, in general, within Cloud systems. Autonomic principles based on self-* properties are outlined, along with a more critical assessment of the current maturity of these techniques. The concepts of self-stabilisation, the viability zone (identified by Ashby) and recent work in machine learning techniques are outlined as core concepts that Cloud systems can make use of. A key objective in this part of the tutorial is to identify where autonomic computing could be of most significance in a general Cloud architecture, focusing specifically on IaaS and PaaS. The discussion is centered on the emerging complexity of many Cloud computing systems, which have a number of potential variables of interest that systems managers need to consider. The presenters will aim to convey that some of these variables are difficult to manage manually, and that their choice needs to be supported through autonomic techniques.

Part II covers techniques that could be used to achieve autonomic self-management. This part starts with the particular aims that a systems or application administrator needs to identify; essentially, these aims outline what characteristics a Cloud system should adhere to during operation. They can be captured in a Service Level Agreement between a client and a provider, or in an agreement maintained internally by a Cloud provider to ensure compliance with various service level objectives, which may be performance-related (e.g. response time, down time, availability), power-related (e.g. number of VMs per machine, number of active machines) or security-related (e.g. security audit level supported). A number of mechanisms are outlined that may be used to keep such service levels within predefined constraints, also identified by a system administrator or application user.
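One of the simplest such mechanisms is a threshold rule inside a monitor-analyse-plan-execute loop. The sketch below is an illustration only, not a tool from the tutorial, and the SLA threshold and pool limits are invented: it monitors recent response times, analyses them against a response-time objective, and plans a scaling action on a VM pool.

```python
def plan_action(response_times_ms, active_vms,
                sla_ms=200.0, headroom=0.5, min_vms=1, max_vms=16):
    """Monitor recent response times, analyse them against the SLA
    objective, and plan a scale-out / scale-in / steady action."""
    avg = sum(response_times_ms) / len(response_times_ms)
    if avg > sla_ms and active_vms < max_vms:
        return ("scale-out", active_vms + 1)   # SLA at risk: add a VM
    if avg < sla_ms * headroom and active_vms > min_vms:
        return ("scale-in", active_vms - 1)    # over-provisioned: save power
    return ("steady", active_vms)

print(plan_action([250.0, 310.0, 275.0], active_vms=4))  # -> ('scale-out', 5)
print(plan_action([60.0, 55.0], active_vms=4))           # -> ('scale-in', 3)
```

Production controllers replace the fixed thresholds with the machine learning and viability-zone techniques discussed in Part I, precisely because hand-tuned thresholds become unmanageable as the number of configuration variables grows.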

Part III discusses how an autonomic Cloud computing system could be constructed using CometCloud. The core concepts in the construction and use of CometCloud are presented, along with examples of its use in science and engineering applications. The autonomic self-healing capability in CometCloud, along with multi-site federation (to support load balancing of requests and market-based task allocation), is used to illustrate the mechanisms identified in Part II of the tutorial.

Help Clinical Intelligence take the next step using state-of-the-art in-memory analytics, Oliver Vettel and Andreas Koop, Roche

Join us in this industry tutorial and contribute ideas to help take Clinical Intelligence to the next level: how should we tackle ever-changing data models in a validated environment; how could we include genomics or proteomics data to stratify patients in a clinical trial; how do we take the next step towards real-life data science? Together with Andreas and Oliver from Roche, and Ian and Dominic from our team at the University of Derby, this is your chance to leave a mark and help improve the lives of cancer patients around the world. Join us in doing now what patients need next.

CRISTAL: Designing Traceable Cloud-based Systems, Richard McClatchey, Andrew Branson

Providing the appropriate level of traceability to elements of data or processes (‘Items’) in large volumes of data, often Cloud-resident, is an essential requirement in the Big Data era. Enterprise-wide data systems should be designed from the outset to support the usage of such Items across the spectrum of business use, rather than from any specific application view. The design philosophy advocated in this tutorial is to drive the design process using a so-called ‘description-driven’ approach, which enriches Cloud-based data models with metadata and descriptions and focuses the design process on the re-use of Items, thereby promoting system evolution, maintenance and integration with legacy systems. This tutorial will introduce the description-driven design (DDD) philosophy, detail the CRISTAL-ISE Open Source DDD software, and present evidence of DDD in data systems at CERN, in health informatics and in business process management, together with an evaluation of its use and benefits. Attendees will receive the CRISTAL-ISE software and supporting documentation on a complimentary USB stick.
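The core idea of description-driven design, namely that an Item's structure lives in a stored description rather than in compiled code, can be sketched in a few lines. This is an illustration of the principle only, not the CRISTAL-ISE API; the class and field names are invented.

```python
class ItemDescription:
    """A run-time description of what an Item looks like. Evolving the
    system means editing descriptions, not recompiling application code."""
    def __init__(self, name, fields):
        self.name = name
        self.fields = dict(fields)  # field name -> expected type

    def instantiate(self, **values):
        # Validate the new Item against its stored description.
        for key, value in values.items():
            if key not in self.fields:
                raise KeyError(f"{self.name} has no field {key!r}")
            if not isinstance(value, self.fields[key]):
                raise TypeError(f"{key!r} must be {self.fields[key].__name__}")
        return Item(self, values)

class Item:
    """An instance whose shape is governed entirely by its description,
    which also gives every Item a natural place to attach provenance."""
    def __init__(self, description, values):
        self.description = description
        self.values = values

# A new kind of Item is introduced by writing a description, not new code:
sample = ItemDescription("Sample", {"id": int, "assay": str})
item = sample.instantiate(id=42, assay="ELISA")
print(item.description.name, item.values)
```

Because every Item carries a reference to the description it was created from, each change to a description can be versioned and each Item traced back to the exact definition in force when it was made, which is where the traceability in the tutorial title comes from.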

Call for Tutorials (Call Closed)

The 7th IEEE/ACM International Conference on Utility and Cloud Computing (UCC 2014) will be held in London, UK from December 8-11, 2014. Tutorial proposals are invited for UCC 2014 on specific aspects of Utility and Cloud Computing, particularly relating to the subject areas indicated in the list of topics below. We seek proposals across a wide range of topics and levels — ranging from fundamentals to the latest advances in hot topic areas.

Topics of interest include (but are not restricted to):

  • Techniques for the exploitation of Big Data and Analytics in the cloud
  • Programming models, languages and tools
  • Architectural models to achieve Utility in Clouds such as high availability and federation
  • Designs and deployment models for Clouds: private, public, hybrid, federated, aggregated
  • Cloud Computing middleware, stacks, tools, delivery networks and services at all layers (XaaS)
  • Virtualisation technologies and other enablers
  • Energy efficiency
  • Data or computationally intensive applications in the cloud (e.g. bioinformatics, geoinformatics, and healthinformatics)
  • Data distribution and I/O
  • Resource management: algorithms, brokering, scheduling, capacity planning, elasticity, instance distribution and marketplaces
  • Mobile Clouds
  • Cloud management: autonomic, adaptive, self-*, SLAs, standards, policy models/languages, performance models and monitoring
  • Beyond technology: Cloud business and legal implications, such as security, privacy, trust and jurisdiction especially in Utility contexts
  • Economic models and scenarios of use

Important Dates

Tutorial Proposals Due: 30th August, 2014 (extended from 18th July, 2014)
Notification of Acceptance: 29th August, 2014
Final Description: 26th September, 2014
Tutorial Slides Due: 28th November, 2014


Tutorial proposals and any enquiries should be sent by e-mail to the tutorial chairs:

  • Sushil K. Prasad, sprasad {aT} gsu.edu
  • David Wallom, david.wallom {aT} oerc.ox.ac.uk


Proposals should be submitted in PDF or .docx format.

Proposal Requirements

Proposal Format: Tutorials are half a day (3 hours) in length. Each tutorial proposal must contain the following:

  1. Title
  2. Name and Affiliation of the Speaker(s)
  3. Abstract (one paragraph, including previous experience with such tutorials)
  4. Intended Audience (one paragraph): Describe the background assumed of tutorial attendees, and any requirements (e.g., bring your own laptop)
  5. Learning Outcome (one paragraph): Describe the benefit, knowledge or skill that will be gained by attendees.
  6. Description (no more than 2 pages): A statement giving clear motivation/justification for the topic to be presented at UCC 2014 and a comprehensive outline of the proposed content.
  7. Materials (one paragraph): A description of materials to be provided to attendees on the conference website – course slides, annotated bibliography, code snippets, etc. NOTE: the materials themselves do not need to be provided in the proposal.
  8. Bio-sketch: A single paragraph bio-sketch per tutorial presenter.


Materials for the tutorial must be emailed at least 7 days before the presentation date. The UCC 2014 Conference Organizing Committee will be responsible for the following:

  • Providing logistics support and a meeting place for the tutorial.
  • In conjunction with the organizers, determining the tutorial date and time.
  • Providing copies of the tutorial materials to attendees.