To help combat the complexity of today’s global, 24/7, service-centric marketplace, organizations implement architectures that deliver better business agility and greater interoperability. Organizations seeking a new generation of dynamic applications leverage Microsoft BizTalk® Server as a trusted solution that provides quicker process change, improved business insight and a competitive advantage.
With the growing use and capabilities of BizTalk Server, it can be challenging to capitalize on the full potential of the product. Enterprises struggle with the increasing cost of building, maintaining and upgrading BizTalk integration solutions. They need to address the short-term and long-term value of the platform and have the right IT strategy for their business needs to enable broader cost savings.
To support companies that run any version of BizTalk, MPS Partners offers ongoing support of the server, providing a strategic and flexible delivery model to address specific needs. As a part of this offering, we:
- Manage the BizTalk implementation
- Establish governance
- Provide ongoing technical support
- Outline an adoption and deployment strategy
- Review infrastructure, software configuration, application development or other key architectural elements
- Mentor internal resources on best practices and technical approaches
- Build artifacts or enhance existing artifacts
- Create and maintain an issues-and-resolutions knowledge base
Customers purchase a block of hours each month to use as they wish for any level of BizTalk planning, development or support. We work closely with an organization’s existing internal IT experts to ensure BizTalk Server implementations are designed, developed and monitored in the best way possible.
Master Data is the lifeblood for any organization that uses an Enterprise Resource Planning (ERP) system such as SAP ERP. In this blog series we will discuss some of the challenges associated with creating and managing master data and in particular material master data.
This first blog in the series will provide some basic background and definitions for some of the terms we will be using throughout this blog series. This is especially important for those folks who may not be familiar with all the business processes where material master data is used.
Most successful companies use enterprise systems to complete the work needed to achieve their goals whether it is creating and delivering products or providing services to their customers. Companies have spent enormous amounts of capital to implement enterprise systems such as SAP ERP to allow them to manage a business process from beginning to end in an integrated and consistent manner.
One of the main goals for implementing an enterprise system is to improve efficiency by removing major obstacles to sharing and accessing information across functional areas. SAP ERP is one of the most complex Enterprise Systems and focuses on operations that are performed within an organization.
It is quite common in SAP to refer to a specific set of functionality as a functional module. For example, a person well-versed in Materials Management is called an MM Expert or an SAP MM Functional Expert. There are many modules in SAP that are designed to perform a specific business function within an organization. Some examples are Production Planning (PP), Materials Management (MM), Sales and Distribution (SD), Financial Accounting (FI), etc.
It is also important to get a basic understanding of how the data is stored and organized in SAP ERP. There are basically three different types of data namely organizational data, master data, and transaction data. In this blog we will focus on Master Data and in particular Material Master Data.
Material Master Data is the most heavily used, and hence the most important, data in SAP: it stores all the data for a material. This data is also made available to a number of processes such as Materials Management, the Procurement Process (Procure to Pay), Fulfillment (Order to Cash), Inventory and Warehouse Management (IWM), Production Planning (PP), Shipping and Quality Management.
In order to manage a vast amount of data, material master data is organized in different tabs or categories commonly referred to as Views. Each view stores data specific to a particular business function.
For example, views related to a purchasing process are basic data, financial accounting, purchasing, and plant/storage data. The views related to fulfillment process are basic data, sales organization data, and sales plant data. The views related to Production are Material Resource Planning (MRP) and work scheduling for each plant. Similarly, the view related to Warehouse and Inventory management is Warehouse Management.
Figure 1: Material Master Views/Categories
As you can see from the above diagram, the Material Master data is comprised of data that relates to a number of different areas. For materials to be used successfully, the data has to be collected and entered into the Material Master record.
Not only does the data have to be entered for those areas but also for the specific organizational areas: plants, storage locations, sales organizations, and so on. For example, a material cannot be purchased without the relevant purchasing data being entered.
In addition, there are different types of materials, each with its own set of complex business rules. The most common material types are Finished Goods (FERT), Raw Material/Components (ROH), and Semi-finished Goods (HALB).
Now let’s discuss the Master Data that is relevant for the Production Order Process: bills of material (BOM), work centers, routings, and production versions. A BOM, sometimes referred to as a recipe depending on the industry, identifies the components that are necessary to produce a material. A routing identifies the operations needed to produce the material. Work centers, as the name suggests, are where the operations are performed. A production version combines a BOM and a routing for a material.
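The relationships among these objects can be sketched in code. The following is a hypothetical, highly simplified model for illustration only; the class and field names are invented and do not correspond to actual SAP tables or APIs:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical, simplified illustration of production master data
# relationships; names are invented, not actual SAP structures.

@dataclass
class BomItem:
    component: str      # material number of the component
    quantity: float

@dataclass
class Bom:
    material: str
    items: List[BomItem] = field(default_factory=list)

@dataclass
class Operation:
    description: str
    work_center: str    # where the operation is performed

@dataclass
class Routing:
    material: str
    operations: List[Operation] = field(default_factory=list)

@dataclass
class ProductionVersion:
    # A production version combines a BOM and a routing for a material.
    material: str
    bom: Bom
    routing: Routing

bom = Bom("FG-100", [BomItem("RM-200", 2.0), BomItem("RM-300", 1.0)])
routing = Routing("FG-100", [Operation("Assemble", "WC-10"),
                             Operation("Inspect", "WC-20")])
version = ProductionVersion("FG-100", bom, routing)
print(len(version.bom.items))  # 2
```

The point of the sketch is simply that a production version ties together the "what" (BOM components) with the "how" and "where" (routing operations at work centers).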
Finally, I’ll cover the Master Data that is needed for costing a material. The Master Data for product costing includes material master data (the Costing and MRP views), the BOM and the routings.
The first step is to create material cost estimates. A material cost estimate is calculated for all in-house manufactured products (finished goods and semi-finished goods); the system accesses the Bill of Material and Routing for the finished goods. The cost estimate is used to calculate the cost of goods manufactured and the cost of goods sold for each product unit. The steps that follow are marking and releasing the cost estimate. Finally, the last step is to indicate in SAP that the material is live.
This completes a brief overview of the material master data and some of the major processes that are involved in the end-to-end process of creating a material in SAP. In the next blog we will discuss some of the challenges associated with creating and managing material master data in SAP ERP.
When you hear the words Agile and Business Intelligence (BI) in the same sentence, what comes to mind? Is it simply applying agile methodologies and concepts to your BI projects or is it something more? Agile is defined as: “able to move quickly and easily”. It is characterized by being “quick, resourceful and adaptable”. So what does this mean with respect to BI? It means quickly responding to constantly changing information and analytic requirements. It means visualizing data to quickly discern insights and increasing the quality of decisions. It means having the tools and platforms in addition to appropriate practices in place to enable agility.
In order to understand the impact of agile BI on an organization, it is necessary to establish a foundation on how current BI solutions are developed and deployed and how traditional BI toolsets interact with these solutions. The following diagram depicts the classic approach to developing and deploying BI solutions from a data perspective:
The primary issue with this approach is that there is typically a long cycle time between when the requirements are defined, the data sources are identified and a usable deliverable is produced. This data process is often overlaid with the classic waterfall approach.
The waterfall approach is a sequential process that is used in software development but can also be applied to BI solution, data warehouse and data mart development. A top-down waterfall approach starts with comprehensive requirements gathering and data source definition before proceeding with design and development. Conversely, a bottom-up waterfall approach focuses on individual data marts and derives the data source requirements from them.
A downside to the waterfall approach is that it contains a large amount of inertia, which precludes rapid changes in direction or design approaches. That is not to say that it should be completely abandoned for enterprise wide BI solutions, but that it should be augmented to increase efficiency.
Another concern is that in a lot of instances, BI data may not be housed in enterprise systems. In fact, 76% of all data needed to create BI reports already exists in standalone databases or spreadsheets. This means that content producers and information consumers may be able to gain valuable insights without extracting information from enterprise systems.
Also important to note is that users demand more from IT departments with shrinking resources. They ask for self-service platforms and reduced cycle times for BI solution deployments. They want advanced data visualizations that enable faster tactical and strategic decision-making. In order to address these challenges, the traditional BI approach needs to adopt a more agile framework and platform.
Agile BI Methodology
The agile BI process can be depicted as:
This process, unlike waterfall, allows users to:
- Identify smaller units of work and volumes of data, adding value throughout the process instead of waiting to develop an entire data mart.
- Develop and deploy BI artifacts (reporting and analytics) alone or in workgroups.
- Achieve insights earlier via data analysis and review instead of spending time defining a complex data integration process.
- Adapt the process to specific needs and benefit from the flexibility of the Definition, Discovery, Evaluation, Deployment and Verification steps.
The Agile BI Platform
It is critical to acknowledge that agile BI is not just the implementation of an agile approach. The platform is also an important component that can lead to the success or failure of the implementation of an agile BI environment. Evaluate solutions based on ease of implementation versus overall value.
The selected platform should address the following platform features and functions at each stage of the agile BI methodology:
With the need for rapid results and adaptability, agile BI platforms should deliver incremental value within a short duration. Ensure that the agile BI platform allows for social collaboration across multiple audiences and business functions of different sizes and domain knowledge. This will reduce risk and cost because issues will be surfaced earlier due to the smaller units of work.
Considering that the content creator/information consumer has direct access to a large quantity of data, the BI platform should have the ability to easily connect to multiple heterogeneous data sources simultaneously. Consumers and content creators should be able to access and explore data with minimal IT intervention.
Data visualizations are key to agile BI platforms. The platform must produce easy-to-understand charts and graphs in order to provide additional insights and facilitate data exploration. Questions asked of the data can serve to define an overall enterprise architecture at a later date. This also means that there will be a much deeper collaboration between the content creator, information consumer and IT (co-location).
The agile BI platform should facilitate fine grain deployment to audiences of any size and support the social interaction of the audience as the deployed BI artifacts (reports, visualizations, etc.) are reviewed and discussed as the verification process commences. This further enhances and focuses collaboration, which can lead to better decisions being made at a much faster pace.
The most important step in the above process is closing the loop between “Verification and Definition” (Re-Definition). It is imperative that this is performed in order to adapt to constantly changing information and analytic requirements.
The deployment of an agile BI environment is a journey comprised of agile methodology and a supporting BI platform. It is not necessarily a monolithic environment, however. There may be various best of breed components of a well-functioning agile BI environment, providing specialized capabilities to fulfill specific business needs. The agile BI environment should continually evolve based on changes in information and analytic requirements, market conditions, organizational DNA and unforeseen future needs.
Business Critical Line-Of-Business (LOB) Data in SharePoint 2013 – Using Winshuttle for no-code or minimal-code approach
In part 3 of my blog series, I’ll discuss a minimal-code approach to integrating SAP with SharePoint. This approach means utilizing a tool or toolset to provide an end-to-end solution. I’ll further discuss how we can use the Winshuttle suite of products to enable integration of SAP with SharePoint 2013.
Using Winshuttle, one can create an end-to-end solution rather than just creating services that expose SAP data. Winshuttle has products that allow you to connect to multiple SAP touch points such as SAP Transactions, BAPIs, and tables and expose the data as a web service. There are lots of tools on the market that allow you to connect to SAP BAPIs and tables, but Winshuttle is unique in that it allows you to record entire SAP transactions, such as MM01 to create a material in SAP. This allows a non-technical person who is well-versed in SAP transactions, such as a Master Data Analyst, to create a powerful web service that exposes a complete business process in a matter of minutes.
A separate set of Winshuttle products can then be used not only to create an electronic form that consumes the web services but also to design the business workflow process using a designer tool. In the end you can integrate most SAP business processes without writing a single line of code. For example, you can create a SharePoint portal that integrates SAP Master Data with SharePoint and further allows users to participate in a workflow.
A vast majority of solutions can be implemented by someone who is well versed in the tool and has a good understanding of SAP LOB data. This is a significant benefit: it reduces the constant dependence on IT, and the ability to build more self-service solutions makes the business more agile. The savings are significant, as even the simplest self-service solution can save countless hours that would otherwise be spent manually entering data in SAP.
Now anyone who has been involved in creating SAP integration solutions for the Microsoft stack of products knows very well that no two SAP integrations are the same and that no product on the market will solve every business problem. Ideally you want all your customers to build their requirements around the strengths of an enablement platform such as Winshuttle; in reality, that’s not always the case. Very frequently you will run into issues where a product, at least out of the box, is not going to solve all your needs.
This is where Winshuttle is different. Yes, it allows you to create moderately complex no-code solutions, but for those advanced, application-like scenarios the platform allows more advanced users, such as IT programmers, to extend the solution. The person implementing the solution doesn’t necessarily have to be an IT programmer, but it certainly helps to have such a person implement the more advanced solutions, as they will require some custom coding. As a result, extremely powerful SAP data automation solutions can be created using Winshuttle with minimal coding.
It is no longer acceptable to wait months to build a data warehouse, create reports or add new insights to a dashboard; the world and business are moving too fast. Today, customers already have solutions in place that provide insights into most of their questions. In fact, 83% of data already exists in some reportable format. The rest of the information they are looking for also exists, but is typically stored in local databases or spreadsheets. The goal of an Agile BI solution is to quickly connect the existing BI solution with these new databases or spreadsheets to deliver new insights.
OLD DOG, NEW TRICK
The traditional BI approach involves defining business requirements at the entity level, building and architecting a data model, configuring an ETL process and then building reports and dashboards over the data. An Agile BI approach starts with the data visualization and leverages existing data sources regardless of format or structure. It leverages the cloud for quick results and focuses on the user experience and collaboration. This Agile BI approach allows users to see results in days or weeks, not months or quarters.
We are not saying you should build your data warehouse on standalone spreadsheets or databases, but as new insights are being defined and tested, a more agile approach is required to quickly validate assumptions and deliver results. In the last two years we have seen an explosion in the Line of Business driving business intelligence solutions. In most instances these people have a deep understanding of the data, and they have already built interfaces or created extracts for the information they need. The goal when working with these LOB users is not to build the perfect data warehouse; the goal is to leverage what is already in place and deliver results. Most companies already have all the traditional reports and dashboards in place to run the business. The new insights LOB users are looking for involve combining the existing solution with new data sources.
Cloud solutions and public and partner data sources offer a completely new opportunity to uncover new insights. Companies today are finding value by combining existing data with 3rd-party or public data. Census data, retail data and consumer data all offer new ways to look at the business and deliver value. New data sources and tools offer opportunities for Line of Business users to validate assumptions and uncover business value by leveraging Agile BI.
Creating documentation is one of the more tedious aspects of the software development process. Not that writing algorithms and object-oriented class hierarchies isn’t tedious, but it often seems (at least to me) that writing plain text explaining your code is more tedious than writing the code itself. Also, if you consider yourself an Agile practitioner (as I do), you’re always evaluating the time spent vs. value added whenever you write much about working software outside of and separate from the working software you already wrote.
I find myself constantly analyzing and re-analyzing the darn thing. Is it too long? Too short? Too wordy? Oversimplified? Too granular? Too vague? Should I have even attempted this?
I don’t claim this conundrum is unique to me. It’s one of the main reasons that many software projects are undocumented or under-documented. Because of this, a tool that helps you write clear, simple documentation, and saves you time doing it, is an invaluable find.
RAML is one of these fabulous creatures.
I was introduced to RAML by a colleague while working on an ASP.NET Web API project over the last few months. What I like best about RAML is that it solves multiple documentation problems through the creation and maintenance of a single file:
- You can design and modify your API collaboratively without writing / re-writing any code.
- The syntax is simple and to-the-point, reducing time needed to properly describe the routes, actions and parameters.
- This document serves as a team “contract” of sorts throughout the development process. Server-side developers, client-side / mobile developers, and testers – everyone – can see and agree on the intended behavior and content of your API requests and responses.
- When the project is done, you’re left with an excellent artifact to hand off and / or publish describing exactly how your API works.
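To give a feel for the syntax, here is a minimal RAML sketch for a hypothetical books API; the resource names, parameters and example payload are invented purely for illustration:

```yaml
#%RAML 0.8
title: Books API
version: v1
baseUri: http://api.example.com/{version}
/books:
  get:
    description: List all books, optionally filtered by author.
    queryParameters:
      author:
        type: string
        required: false
    responses:
      200:
        body:
          application/json:
            example: |
              [{ "id": 1, "title": "Writing Valuable Docs" }]
  post:
    description: Add a new book.
/books/{bookId}:
  get:
    description: Retrieve a single book by its id.
```

Even this short file communicates the routes, verbs, parameters and response shapes that server-side developers, client-side developers and testers need to agree on.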
Now that’s what I call valuable documentation. To get started, all you really need to write a RAML doc is a text editor and access to the spec. However, the team over at MuleSoft has developed a superb browser-based editor for RAML, complete with spec references and “intellisense” of sorts. Best of all, it’s free to use. They’ve labeled it “API Designer” on their site; here’s a sample:
I used it for the project I mentioned above; I found it to be far quicker than writing a text-only version. Being able to click on and view the routes on the right side of the screen was also a really handy feature during various design and testing discussions. I found the API Designer to be a solid tool, and highly recommend it.
So the next time you kick off a REST API project, *start* with the documentation – and write some RAML.
The conversation about business outreach (service first) and infrastructure elasticity (cloud) does not feel complete without including big data.
Every generation of technology upgrade has in some ways created the need for the next. User-created content and social media initially drove the need for big data techniques, but the drivers behind this movement added up quickly once it was understood what big data analysis and prediction can do for business.
A Quick Introduction
Big data commonly refers to the mining and analysis of extremely high volumes of data. The data in question is not structured, since it is collected from a variety of sources, many of which do not follow any standard storage format or schema. The data is also described by its characteristics, primarily volume, variety, velocity, variability and complexity. The techniques for analyzing this data involve algorithms that engage multiple processors and servers, distributing the data to be processed in a logical way. MapReduce is one popular programming model, and Hadoop is one of the popular implementations of MapReduce.
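To make the idea concrete, here is a toy, single-machine sketch of the MapReduce pattern in Python, using word counting, the classic example. Real frameworks such as Hadoop distribute the map and reduce phases across many servers:

```python
from collections import defaultdict

def map_phase(documents):
    """Emit (word, 1) pairs for every word in every document."""
    for doc in documents:
        for word in doc.lower().split():
            yield word, 1

def shuffle(pairs):
    """Group values by key, as the framework does between phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Sum the counts for each word."""
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["big data needs big tools", "data drives decisions"]
counts = reduce_phase(shuffle(map_phase(docs)))
print(counts["big"])   # 2
print(counts["data"])  # 2
```

Because each map call and each reduce call is independent, the framework can run them on different servers and merge the results, which is exactly the property that makes the model a natural fit for elastic cloud infrastructure.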
Big data techniques are not something that only corporations that collect social media data need; they are something every business needs to look into, sooner rather than later. It is a separate topic that every business needs to factor in social media data in some form or another. Even if that part is ignored, the volume of structured data is increasing by the day.
Keeping all that in mind, it is important to explain how well big data fits the elasticity of the cloud. Imagine an operation where data needs to be segregated by some specific parameter across different servers. These servers might run some processing depending on the type of the data, or simply store the data to improve access time. A true cloud environment is the perfect host for such an operation: you can spin up new servers with a specific configuration using just a few lines of script at run time.
Where are we heading?
In 2011, Google Fiber announced that Kansas City would be the first to receive 1 gigabit per second internet speed, followed by Austin, TX and Provo, UT. As per the company’s announcement in February 2014, Google Fiber will be reaching another 34 cities. AT&T stepped up by announcing that its new service, GigaPower, will bring gigabit internet speeds to as many as 100 municipalities in 25 metropolitan areas. Besides Google and AT&T, many other large and small providers, such as Cox, Greenlight Network, Canby Telcom, CenturyLink and Sonic.net, are working on targeted areas to provide superfast internet speeds.
Considering this new scale of bandwidth, the way application technology works is going to change, especially the parts that involve mobile and cloud. It will be much more convenient to have a huge memory- and processor-centric operation running in a cloud environment, streaming status and results to a browser running on your laptop or a small handheld mobile device.
Moving the heavy lifting to the cloud while keeping control on low-resource devices is not something that is going to happen someday; it is happening now, and only the scale and outreach are going to increase exponentially. Everyone connected to this field, be it providers, consumers, decision makers, technology workers, business users or consultants, should pay attention to the changes and keep a strategy for the future.
Power BI is Microsoft’s cloud-based service that leverages Excel to enable self-service business intelligence. The term Power BI has also been used generically to reference the components and technologies that comprise the Microsoft BI suite of tools: specifically Power Pivot, Power View, Power Query, Power Map, Question & Answer (Q&A) and now Forecasting. The Q&A and Forecasting features are currently supported only in Office 365 and SharePoint Online. The other features are fully supported in the desktop (Office Professional Plus) and Office 365 versions of Excel 2013.
The latest incarnation of Office 365 implements time series analysis to provide forecasting capabilities. It is this version and its forecasting capabilities that will be discussed in this article. The description and definition of the specific time series algorithms related to forecasting are beyond the scope of this discussion, but the implications of providing this capability are not.
The methods and techniques for time series analysis are well documented and understood in academia and in the field of statistics, but now this capability is being placed in the hands of the masses, who may or may not have a thorough understanding of the associated techniques or how to interpret the results. This may present a change management issue for an organization, but with some planning a great deal of benefits and insights can be obtained that would otherwise not be realized.
From a change management perspective, it is imperative that a consistent approach be defined and implemented to ensure consistent results when developing an analytics solution. This should also include a training program on terminology, techniques, methods and practices.
Let’s take a detailed look at the process that will lead to obtaining useful insights from a forecasting exercise and then how this process applies to an example implemented in Power BI.
- Business Understanding – Understanding, from a business perspective, the project objectives and requirements and what the specific outcomes should be. This may also include an initial reference to an analytic methodology or approach (forecasting, classification, etc.).
- Data Understanding – Understanding of the traits/personality (profile) of the data. Are there data quality issues? What are the valid domains of attribute values? Are there obvious patterns?
- Data Preparation – Does the data need to be reformatted? How will missing values be handled? What are the relevant attributes or subsets of data?
- Modeling – Identify potential modeling techniques to meet the requirements of the business solutions and its objectives.
- Evaluation – Evaluate the model and determine its fitness for use. How accurate is the model? Does it address the business requirements? Have new insights been exposed that change the understanding of the data?
- Deployment – Present the model results. Make sure the appropriate visualization is used to present the results. Does the deployment require a simple report, or is a new process required to close the analytic loop?
This process is depicted in the following diagram:
The above process steps define the CRISP-DM data mining methodology, which provides an excellent foundational approach and process for the development and deployment of predictive analytic and data mining solutions. It has been around for some time, but the basic tenets are very applicable. Let’s now look at an example of how Power BI forecasting can be leveraged and how the process steps are implemented.
The following data represents new and used car sales from 2002-2014. The data is stored by month. Examining the raw data, this is the opportune moment to address business understanding and identification of the business problem and requirements. In this case the business problem is to forecast future new car sales to help better manage inventory. Also, understanding the nature and characteristics of the data should be accomplished at this point. This can be done via data profiling (min, max, null counts, standard deviation, etc.) and through data visualization. It would also help to have a domain expert available to provide additional insights. With regards to data preparation for Power BI Forecasting, there should be an attribute that can be used for time series analysis. In this case, a new attribute is created named [Period Ending] that is a combination of the [Year] and the [Month] represented internally as a date.
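As a rough sketch of that preparation step, the [Period Ending] attribute can be derived from separate year and month columns as shown below. This is a hypothetical example in Python/pandas rather than the Excel steps performed here, and the column names and sample values are invented:

```python
import pandas as pd

# Invented sample rows standing in for the monthly car sales data.
sales = pd.DataFrame({
    "Year":  [2013, 2013, 2014, 2014],
    "Month": [11, 12, 1, 2],
    "NewCarSales": [1250, 1390, 980, 1045],
})

# Build a date for the first day of each Year/Month pair...
period = pd.to_datetime(
    sales[["Year", "Month"]]
    .rename(columns={"Year": "year", "Month": "month"})
    .assign(day=1)
)
# ...then roll forward to the last day of the month ("period ending").
sales["Period Ending"] = period + pd.offsets.MonthEnd(0)

print(sales["Period Ending"].iloc[0].strftime("%Y-%m-%d"))  # 2013-11-30
```

The important point is simply that the time series algorithm needs one proper date attribute to order the observations, rather than separate year and month fields.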
The above data was loaded into a PowerPivot workbook and uploaded to Power BI where some visualizations were applied. The line chart shows new car sales units over time. This line chart will be our candidate for time series analysis (forecasting). Note that there appears to be a cyclical pattern in the data. This is a good reason to generate a visualization to provide insights into the nature of the data.
Currently, to perform forecasting, Power BI must be placed in HTML5 mode. This is accomplished via an icon in the lower right corner of the web page. Once that has been done, then hovering over the chart will expose a caret that indicates forecasting may be performed.
Clicking the caret produces a forecast and displays an additional panel that contains adjustable sliders for confidence interval and seasonality. The forecasting algorithm will attempt to detect seasonality and display the calculated cycle in terms of units. The seasonality slider allows for manually setting the number of periods over which cycles repeat. For example, if, based on domain knowledge, it is known that the seasonality is different from what is calculated, it can be adjusted accordingly. This may change the forecasted values. In this case, the seasonality is detected to be 12 units (1 year).
The confidence interval slider displays a shaded area that indicates the number of forecasted values that fall within a specified number of standard deviations. If there is a need to have a very high correlation for forecasts, select one standard deviation. This will also be an indication of how well the forecast model fits the data. The nature and requirements of the business problem and the user will determine an acceptable value for the confidence interval. For this data, 68% of expected values fall within one standard deviation.
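To make the one-standard-deviation idea concrete, here is a toy sketch in Python. It builds a naive "repeat last season" forecast and a ±1σ band from the residuals of that rule. This is purely illustrative, under invented data, and is not the algorithm Power BI uses internally:

```python
import statistics

history = [100, 120, 90, 110, 108, 126, 93, 117]  # two 4-period "seasons"
season = 4

# Naive seasonal forecast: repeat the value from one season earlier.
forecast = [history[-season + i] for i in range(season)]

# Residuals of applying the same naive rule over the known history.
residuals = [history[i] - history[i - season]
             for i in range(season, len(history))]
sigma = statistics.pstdev(residuals)

# A +/- one standard deviation band around each forecasted value
# covers roughly 68% of outcomes under a normality assumption.
band = [(f - sigma, f + sigma) for f in forecast]
print(forecast)  # [108, 126, 93, 117]
```

A wider interval (more standard deviations) covers more of the expected outcomes but says less precisely where the series will land, which is the trade-off the slider exposes.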
There is also the ability to perform a hindcast. A hindcast produces a model that uses historical data to predict future values based on a preceding selected point in time. New predictions are generated that show how the current predictions would look if the prediction had been generated at some past point in time.
Prior to this point, the appropriate model would have been selected (time series) and the model applied and evaluated. Within Power BI, the option to select a specific time series model is not available. With regards to model evaluation, adjustment of the confidence interval and hindcasting provides the ability to evaluate the overall fitness of the model.
Finally, the model is deployed and can be used for re-evaluation. This can be done by exporting the model along with its data to Excel and running it back through the forecasting model again.
It has been demonstrated how Power BI forecasting can be leveraged using the CRISP-DM methodology and how advanced analytics can be placed in the hands of the masses. Power BI as a solution is simple to understand, uses existing technologies and is straightforward to implement. Over time, more and more advanced analytic capabilities will be exposed to the masses; to be successful, a well-defined process, approach and appropriate training must be used to ensure that proper results and insights are obtained.
Questions and comments can be addressed directly to:
Director, Data Management – Strategist