c-01 - All right now?

c-02: - Need some comments.
  "In this work we propose a methodology to improve service provision by modeling in a neutral formalism existing choreographies which implement workflows."
  "We chose Petri Nets, which are very well understood and expressive enough for a wide range of purposes."
  "Together with monitoring data, the Petri Net model is used to generate a dynamic workflow model."
  "Actual provision parameters are then used in a dynamic model simulation."
  "Different analyses can be made using existing simulation tools for dynamic models."

c-05: concurent -> concurrent
  "Petri Nets are frequently used as a formalism to describe concurrent computations, including business processes."
  "In a Petri Net, places (circles) and transitions (squares) are connected and tokens travel between them."

c-06:
  "When representing a workflow, transitions correspond to activities and places to their pre- and post-conditions."

c-07:
  "Therefore, Petri Nets can be used to represent the execution of simple, sequential BPEL processes."
  "Branches and loops can also be represented."

c-08:
  "Branching and looping probabilities are estimated by means of monitoring."
  "Other, more complex patterns can be expressed."

c-09:
  "In usual Petri Net models, tokens stay in places until transitions are fired. The actual time to travel to the next place is not taken into account."
  "This is an oversimplification introduced by the Petri Net model."

c-10:
  "Therefore we introduce continuous time to recover the dynamics of the initial system."
  "Transitions correspond to stocks."
  "Places correspond to token flow in and out of transitions."
  "Transition times are assumed to follow a statistical distribution - exponential decay, in our case."

c-11:
  2:33 "The dynamic workflow model is a component of the overall simulation framework."
  2:46 "Another component of the simulation framework represents the provider-side parameters and resources."

c-12:
  2:56 "Different input regimes can be used to describe various scenarios for arrival of client requests."
  3:05 "The stock of incoming requests models the process of queuing the requests and starting the workflow instances."

c-13:
  3:14 "The blocking factor models the overload of the provision resources."
  3:23 "Requests that wait too long to get served are rejected."

c-14:
  3:33 "To assess basic QoS of the workflow, we can look at the success and failure rates from the workflow model."
  3:41 "Modeling the workflow and modeling the provision resources can be done in parallel and rather independently."
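For reference alongside c-05 through c-14, here is a minimal Python sketch of the model the narration describes: workflow activities (Petri Net transitions) become stocks that drain with exponential decay, places become the flows between them, a stock of incoming requests is fed by a constant input regime, a blocking factor holds requests back when the thread pool is full, overdue requests are rejected, and the served/rejected percentages give the basic QoS figures. The workflow, thread count, and timings below are illustrative assumptions, not the parameters of the actual demo model.

    # Illustrative stock-and-flow sketch of the workflow provision model
    # (names and numbers are assumptions, not the demo's actual parameters).
    DT = 1.0                 # simulation step (seconds)
    SIM_TIME = 600.0         # total simulated time (seconds)

    arrival_rate = 2.0       # constant input regime: requests per second
    max_threads = 20.0       # provider-side resource pool
    max_wait = 30.0          # seconds a request may queue before rejection

    # Workflow activities (Petri Net transitions become stocks); each stock
    # drains with exponential decay: outflow = stock / mean_service_time.
    activities = {"receive": 5.0, "process": 15.0, "reply": 2.0}  # mean times (s)

    incoming = 0.0                                   # stock of queued requests
    executing = {name: 0.0 for name in activities}   # instances per activity
    served = rejected = 0.0

    for _ in range(int(SIM_TIME / DT)):
        incoming += arrival_rate * DT                # input regime

        # Blocking factor: start new instances only while threads are free.
        free = max(max_threads - sum(executing.values()), 0.0)
        started = min(incoming, free)
        incoming -= started

        # Requests that wait too long to get served are rejected.
        overdue = max(incoming - arrival_rate * max_wait, 0.0)
        incoming -= overdue
        rejected += overdue

        # Token flow through the sequential activities (places become flows).
        inflow = started
        for name, mean_time in activities.items():
            outflow = executing[name] / mean_time * DT
            executing[name] += inflow - outflow
            inflow = outflow
        served += inflow

    total = served + rejected
    print(f"served {100 * served / total:.1f}%, rejected {100 * rejected / total:.1f}%")

In this sketch every completed instance counts as served; the success and failure rates of the workflow itself (c-14) would instead come from splitting the final flow by the branching probabilities estimated from monitoring (c-08).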
c-15:
  3:53 "Here is a simple demonstration of how the orchestration provision model can be used."

c-16:
  3:56  1    "This is a full model for a sample orchestration."
  4:01  2    "We can zoom in to look at some details."
  4:03  3    "The input regime and the request stock."
  4:06  4    "The model of the orchestration with flows for places and stocks for transitions."
  4:14  5    "The success, failure and rejection flows."
  4:17  6    "Back to the overall view, we can start the simulation."
  4:26  7    "We can check out different model elements and their value graphs."
  4:30  7.1  "Curves are sketched inside stocks, and model parameters have adjustable sliders."
  4:40  7.2  "Let's take a closer look at the stock of incoming requests."
  4:50  7.3  "Let's also see the number of executing instances of an activity."
  5:07  8    "Another view gives the number of executing component services through time."
  5:18  9    "And yet another counts the percentages of successfully served and rejected requests so far."
  5:26  10   "Let us change the input regime. Instead of a constant number of requests per second, let us assume a temporary peak period with tripled load."
  5:48  11   "The simulation is rerun to show the new state."
  6:05  12   "Let us also turn on the mechanism for scaling resources (the number of threads) up or down to meet the load..."
  6:14  13   "... while allowing the model to block incoming requests when all resources are used up."
  6:20  14   "The model now starts behaving in a more chaotic and non-intuitive fashion."
  6:28  15   "This is the number of executing component services as the system struggles with the load."
  6:49  16   "This is a very unfavorable scenario with many requests rejected before getting served."
  7:03  17   "Let us see what could be expected if we decrease the time for up-scaling the available number of threads. It was 300 seconds before and we reduce it to 30 seconds."
  7:32  18   "The behavior is less chaotic, but the system still does not adapt too well."
  7:46  19   "We suspect that the problem may be caused by several long-running human-provided services which present a bottleneck. We had two of them."
  8:02  20   "Let us decrease their expected running times and see what happens."
  8:29  21   "Our suspicion is confirmed by improvements in service counts and the enhanced performance trends."

c-17:
  "Thanks for your attention."

c-18-credits.mp4
  - Make it last longer at the end.
  - Make S-Cube more prominent (fading to the S-Cube background is OK).

---------------------------------------------------------------------------
music too loud @ 1:53
c-16-s-10 -> 2 secs right
c-16-s-11 -> 3 secs right
c-16-s-15 -> 5 secs right
c-16-s-16 -> 3 secs right
c-16-s-19 -> 3 secs right
c-16-s-21 -> 5 secs right
---------------------------------------------------------------------------
c-16-s-15 -> 2 secs right
c-16-s-19 -> 2 secs right
c-16-s-21 -> 3 secs right

NEW: c-16-18.5  7:48 "The percentage of rejected requests has improved, but it is still very large. In fact, larger than the percentage of served requests."
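The scenario changes narrated in c-16 (items 10, 12, and 17) and the new c-16-18.5 line can be reproduced on the same kind of sketch: an input regime with a temporary tripled-load peak, plus a thread pool that follows demand with a first-order lag whose time constant is the up-scaling time (300 s versus 30 s). Again, all names and numbers are illustrative assumptions, not the demo's parameters.

    # Illustrative sketch of the c-16 scenario changes (all numbers assumed).
    def arrival_rate(t, base=2.0, peak_start=120.0, peak_end=240.0):
        """Constant regime, except a temporary peak period with tripled load."""
        return base * 3.0 if peak_start <= t < peak_end else base

    def scale_threads(threads, demand, dt, upscale_time, downscale_time=60.0):
        """Move the thread count toward demand with a first-order lag."""
        tau = upscale_time if demand > threads else downscale_time
        return threads + (demand - threads) / tau * dt

    # Compare slow (300 s) and fast (30 s) up-scaling over a ten-minute run.
    for upscale_time in (300.0, 30.0):
        t, dt, threads, lag = 0.0, 1.0, 10.0, 0.0
        while t < 600.0:
            demand = arrival_rate(t) * 10.0          # rough threads-needed estimate
            lag += max(demand - threads, 0.0) * dt   # thread-seconds of shortfall
            threads = scale_threads(threads, demand, dt, upscale_time)
            t += dt
        print(f"up-scaling time {upscale_time:5.0f} s -> {lag:7.0f} thread-seconds short")

The accumulated shortfall is a crude stand-in for the blocked and rejected requests seen in the demo: shortening the up-scaling time reduces it but does not remove it, which is consistent with the observation added as c-16-18.5.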