
Protected Flow Manufacturing: A New Approach To Production Planning and Execution

Overcoming the Limitations of MRP and Finite Scheduling

Back in the day, Material Requirements Planning (MRP) was a game changer. By taking a combination of actual and forecasted demand, cascading it through multiple levels of bills of material, and netting the exploded demand against existing inventory and planned receipts, it could create a plan that included purchase orders, shop orders and reschedule messages. Because these bills of material can be many layers deep and encompass hundreds or even thousands of component parts and subassemblies, without automated MRP there is simply too much data and complexity for a human to possibly work with.
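The gross-to-net cascade that MRP automates can be sketched in a few lines of Python. This is a deliberately simplified, single-pass illustration (the items, quantities and inventory figures are invented, and real MRP also time-phases demand and considers planned receipts and lot sizing):

```python
# Simplified MRP gross-to-net explosion (illustration only).
# BOM: parent item -> list of (component, quantity per parent).
bom = {
    "bike": [("frame", 1), ("wheel", 2)],
    "wheel": [("spoke", 32), ("rim", 1)],
}
on_hand = {"bike": 0, "frame": 5, "wheel": 10, "spoke": 100, "rim": 0}

def explode(item, gross_qty, planned):
    """Net gross demand against inventory, then cascade to components."""
    available = on_hand.get(item, 0)
    net = max(gross_qty - available, 0)
    on_hand[item] = max(available - gross_qty, 0)  # consume stock first
    if net == 0:
        return
    planned[item] = planned.get(item, 0) + net     # plan an order for the shortfall
    for component, qty_per in bom.get(item, []):
        explode(component, net * qty_per, planned)

planned_orders = {}
explode("bike", 20, planned_orders)
print(planned_orders)
# {'bike': 20, 'frame': 15, 'wheel': 30, 'spoke': 860, 'rim': 30}
```

Even this toy version hints at the combinatorics: demand for 20 bikes fans out into hundreds of spokes, and every level nets against whatever stock happens to be on hand.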

Yet, while MRP was able to replace other archaic, clumsy and inaccurate planning methodologies, it has always had its limitations. Because MRP only planned for materials, it ignored labor and equipment resources and assumed infinite capacity. Finite scheduling helped, but both approaches were slow and static (and often clumsy), even as the pace of business accelerated and change became the only constant. The harsh reality is: Even today, production planning and execution are still largely dependent on spreadsheets, hand-written schedule boards and the ubiquitous daily production meeting, leading many to desperately think, “There has to be a better way!”

That is exactly what MRP veteran Richard T. (Dick) Lilly thought. As an early pioneer, working alongside Ollie Wight, he helped make MRP the game-changer that it was in the 1960’s. He went on to found three successful software companies, including Lilly Software Associates, where he obtained a United States patent (5,787,000) for his concurrent (finite) scheduler. He later sold that business to Infor, but continued to search for “a better way.” And he found it.

It’s called Protected Flow Manufacturing, a new methodology that simplifies planning and execution. Protected Flow Manufacturing prevents premature release of work, reduces time jobs spend waiting, protects promise dates and provides a clear priority to each operation, without complicated finite scheduling. It accurately predicts when each job will arrive at a specific work center (resource), monitors risk and makes the decision about what to work on next dead simple.

What’s Wrong With MRP?

The introduction of packaged Material Requirements Planning (MRP) software for the masses (of discrete manufacturers) back in the late 1970’s was transformational, although nobody really called it that back then. “Transformative” innovation is very much a 21st century term. But MRP truly was game-changing back in the day.

While the concept dates back to the 1950’s, for years afterwards many struggled to apply the methodology. Although the concept was simple enough, bills of material could be many layers deep and encompass hundreds or even thousands of component parts and subassemblies. Without software to automate MRP there was simply too much data and complexity for a human to possibly keep track of. Adding to the dilemma, from the 1950’s through the 1970’s the very concept of packaged software solutions was scoffed at. The prevailing sentiment was that (of course) everyone is different and needs a custom-designed system. This left automated MRP systems available only to those with large information technology (IT) staffs capable of developing their own custom versions of MRP.

That started to change in the late 1970’s when packaged applications made an entrance, not just on massive mainframes, but on “mini-computers” as well. But MRP didn’t turn out to be the savior the experts expected it to be. Why not?

Infinite Capacity Throughout the Black Hole of Production

First of all, MRP assumes infinite capacity and “trusts” production run times and supplier lead times implicitly. These assumptions proved troublesome for some and a fatal flaw for others. For one, lead times are treated as constants, even though they can be quite variable. Even when the lead-time of a manufactured product is calculated from setup and run times, it can be inaccurate because of the added lead-time component of wait (or queue) time. MRP treats this as a constant as well, when in fact it is anything but.

MRP took a demand due date and backed off the lead-time to give you a release date for production orders. It didn’t really concern itself with what happened in between those two dates. It was up to you to figure that out. Most manufacturers used backward scheduling for the individual operations…again ignoring capacity. Capacity requirements planning (CRP) modules were used to highlight trouble spots, but didn’t offer much else.

Some might argue that finite scheduling is the answer. But the reality is: Finite schedulers are beyond the reach of many companies, require a lot of work and assume standards are more accurate than they typically are. And even if those setup, run and wait times are an accurate measure of the average time, they are just that – an average. Finite schedulers treat them as a constant, when again, they can often be quite variable.

Finite schedulers must determine relative priorities of tasks and tend to do so on an order-by-order basis. Traditionally, finite scheduling assumes a job has the same relative priority throughout all of its operations. But that’s not necessarily the case, and preserving a single priority can sacrifice efficiency and due dates unnecessarily.

Speed and Complexity

And then there is the issue of speed and complexity. It was not unusual for early MRP runs to take a full weekend to process, and during that time nobody could touch the data. This didn’t work so well in 24X7 operations or where operations spanned multiple time zones. Over time, MRP was enhanced so that most systems today run faster and can operate on replicated data, allowing operations to continue. But that only means the plan might be out of date even before the run completes.

And MRP never creates a perfect plan. So while most planners were relieved of the burden of crunching the numbers, they were also burdened with lots of exceptions and expedited orders.

Human Nature: It’s a Trust Issue

And finally, there is human nature. MRP required a paradigm shift and the planning process executed by MRP is complex. Not everyone intuitively understands it. While MRP is not rocket science it is hard to rewind, step through and “see” all that is going on. And if planners and schedulers, or even operators don’t really understand it, they are unwilling to relinquish control, hence the end-runs and work-arounds with spreadsheets, scheduling boards and meetings.

It’s basically a trust issue. Without complete and implicit trust, it’s just human nature to pad standards to create a buffer, allowing for disruptions along the way and Murphy’s Law (if something can go wrong, it will). As these estimates (vendor lead times, production and wait times) get inflated, performance might look good on paper, but in reality it declines along with productivity and utilization.

Yes, MRP brought a new dimension to material planning. But has it really helped manage the execution of the plan? No. Some might even argue it was never intended to. It might help get the materials to supply the production process on time, maybe even just in time. It can tell you when to release an order and when to complete it. But it does little to help in between, which is where the real execution happens.

Yet through the past three decades, planning and execution haven’t changed all that much. Yes, MRP has gotten faster. Yes, there are viable finite schedulers on the market. But in general, solution providers have primarily thrown technology at the problem, assuming the functionality was perfected decades ago.

Next generation solutions add speed. They are moving to the cloud, becoming accessible through mobile devices and are perhaps even enhanced with analytics. But little has been done to improve the methodology or the functionality. Protected Flow Manufacturing is the first completely new approach to production planning and execution in decades.

It’s Time for a Fresh Approach

Recognizing all these limitations, Mr. Lilly and his associates formed a new company called LillyWorks, and set about re-evaluating how MRP and other scheduling tools were implemented in the real world. In doing so, the group challenged assumptions that were made decades ago, but were somehow never revisited. This resulted in a new concept they call Protected Flow Manufacturing.

The concept is based on Little’s Law. Since few manufacturing folks are interested in queuing theory, suffice it to say it is based on the same intuition we all employ in our daily lives: when you walk into a bank (or store, or registry of motor vehicles), the fewer people there, the less time you wait.

Applying that same reasoning to a work center or piece of equipment, the less work you bring out to the shop floor, the less time jobs wait between operations. And you know that wait (or queue) time is the reason why it takes you four weeks to complete a job even though run and setup times add up to a single week.
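For the curious, the queuing theory boils down to Little’s Law: average work-in-process equals throughput multiplied by average flow time. A minimal sketch, with made-up numbers, shows why releasing less work shortens the wait:

```python
# Little's Law: WIP = throughput x flow time (long-run averages).
# Rearranged: flow_time = WIP / throughput, so less work released
# to the floor means less time each job spends in the system.
throughput = 5            # jobs completed per week (illustrative)
wip = 20                  # jobs on the shop floor
flow_time = wip / throughput
print(flow_time)          # 4.0 weeks per job

# Halve the work-in-process at the same throughput:
print(10 / throughput)    # 2.0 weeks per job
```

The run and setup time per job hasn’t changed at all in that second case; only the waiting has.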

So Protected Flow Manufacturing prevents the premature release of work. You might think you are being smart in starting early, in order to allow yourself some extra time, but doing so can result in unintended consequences that have a negative impact on this and other jobs. Everyone would agree releasing a job too late is bad. But releasing it too early can be equally bad. That implies there is a “right” time to release it. And Protected Flow Manufacturing will calculate that.

In doing so, Protected Flow Manufacturing uses setup, run and move time to calculate the “operating time,” but ignores queue times at individual work centers or operations.

In a perfect world, with nothing else competing for resources, this operating time is how long it would take you to manufacture the product. But of course we don’t live in a perfect world and of course you don’t simply work on one job at a time. You have different orders competing for the resources on the shop floor.

Therefore we have to budget in some protection. But even though traditional queue time is defined as a constant, it is indeed quite variable. Trying to predict that variability at the level of granularity of each operation is complicated, maybe even impossible. But you know generally how long it takes to get something all the way through the process. Maybe that is four weeks. And you know the operating time. Let’s say that is one week. That means you are actively working the job for a week and it spends another three weeks waiting. So you have a 3:1 ratio of buffer time to operating time and in this case, a buffer of three weeks.

So that’s exactly what Protected Flow Manufacturing has you do: Define the ratio of work to buffer time for the entire work order, or maybe even a category of work orders. If you have a 3:1 ratio today and are largely hitting your due dates, maybe you set it at 2.5:1. See how it works for you. Chances are you will find yourself whittling that down over time as you build more confidence in the system.
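That arithmetic can be sketched directly (the numbers here are illustrative, not prescribed by the methodology):

```python
# Deriving a starting buffer-to-operating ratio from history
# (illustrative figures: ~4 weeks door-to-door, ~1 week of actual work).
typical_flow_time_days = 20   # how long jobs take today, end to end
operating_time_days = 5       # setup + run + move across all operations
buffer_days = typical_flow_time_days - operating_time_days
ratio = buffer_days / operating_time_days
print(ratio)                  # 3.0 -> a 3:1 buffer-to-operating ratio
```

Starting from the ratio your history already implies, and tightening it gradually, is what lets you whittle lead times down without betting the due date on day one.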

Sounds simple enough, but your next question might be, “Without queue times for each operation, how do I schedule the order?” The answer is, you don’t. Not in the traditional “order by order” sense. Instead, you predict what will happen at each of the resources (work center, machine, etc.) at future moments in time. After the work order has been released, that’s actually where the decisions must be made. With multiple jobs sitting in the queue, which should the operator work on next? Protected Flow Manufacturing provides a clear priority to each operation without finite scheduling.

Once these decisions are made, you find you actually have an implicit schedule for each work order, step by step, indicating when the work will arrive at the resource and when the operation will start. If you follow the rules, you will be able to predict when the job will be completed. And along the way, you can assess the risk of missing that due date. But Protected Flow Manufacturing is designed to minimize that risk and protect promised dates.

Sounds like a lot of work? Perhaps if you had to do this manually, but of course all these predictions can be automated, just like MRP was automated.

Here’s How It Works

Protected Flow Manufacturing calculates when each operation of each job will start and finish, based on

  • Resource capacity
  • Estimated setup and run times (and move times if applicable)
  • A defined buffer to operating time ratio
  • Other jobs also waiting for the same resources
  • Material availability (which may include lead time for ordering additional material)

Protected Flow Manufacturing starts with a due date for the job. It then calculates the operating time from the setup, run and move times for each operation and adds a buffer based on the ratio you define. Let’s say you have Job A where you will spend 3 days working and you have 12 days of buffer (a ratio of 4:1). Protected Flow Manufacturing would make the order available to be worked on 15 days prior to the due date, and not before.

At that point you could start working on the first operation, but only if the resource is available. The job might sit waiting for that resource. No work is being done, but some of the buffer is being eaten up. When released, it has 100% of its buffer left. After two days of no activity, it has 83.3% of its buffer remaining. If another job (Job B) is also waiting for that resource, then when the resource is projected to free up, Protected Flow Manufacturing says you should work on the job with the smallest percentage of its buffer remaining. If Job B has less than 83.3% of its buffer remaining, it goes first. Meanwhile more buffer gets eaten away on Job A until it is the job projected to have the smallest percentage of its buffer remaining at the next (future) moment in time when you need to decide “what’s next?”
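The release rule and the “what’s next?” rule described above can be sketched as follows. This is a simplified illustration, not LillyWorks’ actual implementation; the field names and Job B’s numbers are invented:

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    operating_days: float    # setup + run + move across all operations
    buffer_days: float       # operating_days x the buffer ratio
    days_since_release: float
    days_worked: float       # operating time already consumed

    def release_lead_time(self) -> float:
        # Release no earlier than operating + buffer days before the due date.
        return self.operating_days + self.buffer_days

    def buffer_remaining_pct(self) -> float:
        # Buffer is consumed by elapsed time that was not productive work.
        consumed = self.days_since_release - self.days_worked
        return 100 * (self.buffer_days - consumed) / self.buffer_days

# Job A from the text: 3 days of work, 12 days of buffer, idle 2 days so far.
a = Job("A", 3, 12, days_since_release=2, days_worked=0)
print(a.release_lead_time())                 # 15 days before the due date
print(round(a.buffer_remaining_pct(), 1))    # 83.3

# "What's next?" -> work the queued job with the least buffer remaining.
b = Job("B", 2, 8, days_since_release=4, days_worked=1)  # 3 idle days of 8
queue = [a, b]
print(min(queue, key=Job.buffer_remaining_pct).name)     # B goes first
```

Job B has only 62.5% of its buffer left versus Job A’s 83.3%, so under this rule the operator works Job B first, exactly as the example above describes.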

In order to accurately predict outcomes, Protected Flow Manufacturing travels forward in time to these future moments. That might be when capacity will become available (an operation is predicted to finish) or an operation is predicted to arrive at a resource. Protected Flow Manufacturing then answers the question of “what’s next?” based not on the conditions of the present moment, but on conditions projected for that future moment in time. It then “loads” that job.

Ultimately Protected Flow Manufacturing loads all work left to be scheduled and completed. With all the parameters established (capacity, operating times and buffer ratios), this can all be automated. Operators simply need to follow the rules and suddenly planners/schedulers can turn their attention to improving processes rather than figuring all this out and then fighting fires when the best laid plans go astray.

What About Materials?

The accuracy of predicted start and completion of operations is predicated on the needed materials being available. So even if the resource has the required capacity available when the operation is predicted to arrive, Protected Flow Manufacturing will not load the job unless the materials are available. How does it do that without MRP? It links materials directly to the operation. Of course it can “see” existing inventory, and looking out into the future, it can determine from scheduled receipts when additional material is due to arrive. If there are no (or insufficient) scheduled receipts, it uses the lead-time.

This is fairly intuitive for simple, single-level bills of material (BOMs), but unless you are running a simple repetitive manufacturing process, you seldom have the luxury of a single-level BOM or a simple linear, sequential process. If you do, you are probably using much simpler methodologies than MRP.

In a more complex environment, perhaps in a job shop, there is a lot more involved than just a multi-level BOM. You might have multiple processes overlapping or running in parallel, perhaps making subassemblies or semi-finished goods, then converging in a final stage. Or perhaps you start with a common process and then diverge. Think of cutting a piece of sheet metal and then sending different cuts to different work orders.

So instead of relying on a multi-level BOM, the Protected Flow Manufacturing concept assumes a multi-level work order, where the interdependencies are not just implied, but defined specifically. This might add a level of complexity to your operation, but it also makes it a lot more like the real world.

This approach also addresses an additional limitation of MRP. MRP assumes a shop order for each level in the BOM. Even a very simple product assembled from two manufactured items requires three shop orders: one for each of the components and a third to assemble them. MRP also requires you to receive these components back into stock even if you never keep an inventory of them on hand. You might record the receipt to inventory and then immediately issue them back out to the shop floor. Oftentimes this is a paper-only movement that never physically occurs, yet it still creates extra, unnecessary transactions.

Also, each shop order has its own due date. Yet the due dates of the shop orders making the components are not directly connected to the final assembly order. So when the due date from the customer changes, someone has to remember to go back and adjust the due dates for the shop orders making the manufactured components. These limitations are most troublesome in a job shop environment where work is driven by actual orders for non-standard products.

Protected Flow Manufacturing’s multi-level work order approach eliminates these problems. So, does this mean Protected Flow Manufacturing is applicable only to job shops where material is purchased directly for individual jobs and nothing is ever made to stock? No, it just means it needs to accommodate stock orders for both purchased and manufactured parts.

Bonus: Rush Orders Become Self-Expediting

With the automation of MRP, planners/schedulers really became expediters. MRP came up with a plan, but no plan is ever perfect, and neither is supplier or shop floor performance. Capacity proves not to be infinite. Due dates change. Suppliers miss scheduled deliveries. And of course a rush order trumps all other exceptions. So how does Protected Flow Manufacturing eliminate expediting of a rush order?

Remember Job A? It had 3 days of work and 12 days of buffer. So we released it 15 days before it was due. What if all of a sudden you only have 10 days to complete a similar job? When you release the job with the rush due date, it has already lost 5 days of its buffer. So it hits the first operation and instead of having 100% of its buffer remaining, it only has 58.3% of its buffer remaining. It will automatically get prioritized ahead of those released with the full buffer, with absolutely no manual intervention.
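The same buffer arithmetic, applied at release time, shows why the rush order self-expedites. Again, an illustrative sketch, not the product’s code:

```python
# Rush order: released with less than its full buffer (illustrative numbers).
operating_days = 3
buffer_days = 12                  # 4:1 ratio -> normally released 15 days out
days_until_due = 10               # rush: only 10 days left to the due date

# Buffer already gone before the job even hits the first operation:
consumed_at_release = (operating_days + buffer_days) - days_until_due   # 5 days
pct_remaining = 100 * (buffer_days - consumed_at_release) / buffer_days
print(round(pct_remaining, 1))    # 58.3 -> jumps ahead of full-buffer jobs
```

Any job released with its full buffer starts at 100%, so the rush order’s 58.3% automatically puts it first in line at every resource, with no phone calls or red tags.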

Conclusion

Protected Flow Manufacturing obviously takes a new and novel approach to execution. But is it better? It is definitely a lot simpler and easier to understand than MRP and finite schedulers. It addresses the real-life challenges job shops have perennially faced and reflects the realities of actual operations, whether you operate in make-to-stock mode, make-to-order mode or somewhere in between.

It is “better” because it makes predictions that respect the reality that job priorities change over time, including the ripple effect to upstream and downstream operations. It reflects what is likely to occur in the future when workers perform according to the priorities that it calculates for them, while also acknowledging limited capacity resources. And it enhances the material plan with a production schedule that can be trusted and executed simply by following the rules.

But the proof is in the execution. Interested in how Mr. Lilly and his associates at LillyWorks have incorporated Protected Flow Manufacturing concepts into a new solution? Click here to learn more.


SAP Business Suite on HANA: Changing the Conversation

It’s Not About the Technology, It’s About the Business

I recently got an update on SAP Business Suite on HANA from Jeff Woods, former industry analyst, currently Suite on HANA aficionado at SAP. Jeff had lots of good stuff to share, including some progress to date:

  • 800+ Suite on HANA contracts have been signed
  • 7,600+ partners have been trained
  • There are 200+ Suite on HANA projects underway
  • 55 of these projects have gone live (and the number is growing)
  • The largest ERP on HANA system supports 100,000 users

So the Suite on HANA is quite real. But the single message that resonated most strongly with me: the conversation has (finally) changed. While we’ve been hearing about HANA as this wonderful new technology for several years now, for the most part the talk was about technology, and even when the technologists spoke about purported business value, they spoke in very technical terms. But the audience I write for, business leaders in various industries, don’t care about technology for technology’s sake. Many don’t (care to) understand tech-speak. But they do care about what technology can do for them.

A Year Later…

It was just about a year ago that SAP announced the availability of SAP Business Suite powered by HANA, complete with live and live-streamed press conferences in both New York City and Walldorf, Germany. I don’t think I have ever seen such genuine excitement from SAP folks as was displayed in this announcement, and yet the “influencers” in the audience were a bit more subdued. A year ago I attributed this to the fact that these same influencers tend to be a quite jaded bunch, hard to impress. We had also been hearing about HANA for a few years already. There wasn’t a “newness” or game-changing feel about the announcement. But impressing the influencers is only one step towards the real goal of engaging with prospects and customers.

A year ago I also wrote, “SAP is trying hard to change the conversation to be less about the technology and more about the business value.  What is the real value? In the words of one early adopter: HANA solves problems that were deemed unsolvable in the past.” But uncovering those previously unsolvable problems required some visionary thinking.  Tech-speak is not going to get the attention of the guy (or gal) that signs the check or spur that kind of thinking. And a year ago the conversation hadn’t changed. Just look at how the vision of HANA was portrayed:

  • All active data must be in memory, ridding the world of the “rusty spinning disk”
  • Full exploitation of massively parallel processing (MPP) in order to efficiently support more users
  • The same database used for online transaction processing (OLTP) and analytics, eliminating the need for a data warehouse as a reporting tool for OLTP to support live conversations rather than “prefabricated briefing books”
  • Radically simplified data models
  • Aggressive use of math
  • Use of design thinking throughout the model

Look carefully at those words. They mean nothing to the non-technical business executive. Sure, those words got the attention of some forward-thinking CIO’s, and that was enough to kick-start the early projects, projects that produced amazing results. But that’s as far as the message got. And even when the message was not articulated in technical terms, it was presented at too high a level of abstraction. Business executives faced with important decisions don’t think in terms of “becoming a real-time business.” Operational managers don’t seek out “transformative innovation without disruption.” They want to get through the day most effectively and efficiently and make the right decisions.

Asking the Right Questions Today

So how do you change the conversation? By asking a different kind of question. Because “faster” is universally accepted as a good thing, in the beginning the HANA conversation might have been kicked off with the question to the CIO: What processes are running too slowly today? But in talking to the business user, you need a different approach. SAP’s “cue card” below is a good start. You are now seeing conversation starters that make more sense to the business leader. Take the time now to read them carefully. If you are a business leader, they will resonate much more than discussions of MPP and column-oriented databases or even speed of processes. I especially like the business practice questions in the rightmost column.

Cue card

Source: SAP

But if I were sitting across the table from a business leader, I might ask questions that are even more direct and down-to-earth. For example:

  • Describe a situation where you have to hang up the phone, dig deeper and get back to your customer or prospect later. (By the way Jeff’s thought was that by hanging up you only encourage them to pick up the phone and call your competitor.)
  • What summary data do you get today that consistently requires more detail before you make a decision? Can you get at that data immediately (no delays) and easily (no hunting around)?
  • At what level of granularity are you forecasting revenue? Is it sufficiently detailed? Are you forecasting by region, or maybe by product line, when you would love to be able to forecast by territory, individual customer and individual product combined?
  • Are there decisions that require you to consult with others? How much time does this add to the decision-making process? How easy or hard is it to keep track of who to contact? How quickly can you make contact? Quickly enough?

The goal really is to improve the business not just in small linear steps, but to increase the speed of decisions, and therefore efficiency, exponentially. The first step is to provide new ways of engaging with the system, which means changing the user experience. But to change the game, you need to make improvements to the process itself. SAP’s new Fiori applications are a good example of this progression.

Fiori: More Than Just a Pretty Face

Last spring, SAP announced SAP Fiori, a collection of 25 apps that would surround the Business Suite, providing a new user experience for the most commonly used business functions of ERP. While useful in pleasing existing users and perhaps even attracting new users within the enterprise, this first set of apps just changed the user interface and did not add any significant new functionality.

The latest installment has 190+ apps supporting a variety of roles in lines of business including human resources (HR), finance, manufacturing, procurement and sales, providing enhanced user productivity and personalization capabilities. The apps offer users the ability to conduct transactions, get insight and take action, and view “factsheets” and contextual information. The next round of Fiori apps is expected to add even more new capabilities, taking them to the next level in changing the game.

The MRP cockpit is an example of this next generation Fiori app and a perfect illustration of how these new apps can recreate processes, even ones that are 30 years old. If you “know” manufacturing, you probably also know that the introduction of Material Requirements Planning (MRP) software back in the late 70’s was transformational, although nobody really called it that back then. “Transformative” innovation is very much a 21st century term. But it truly was game-changing back in the day.

Last year, even before the conversation had shifted, I saw the parallels between the potential for HANA and the automation of the planning process that MRP brought about. Today the MRP cockpit delivers on that potential.

For those outside the world of manufacturing, in a nutshell, MRP takes a combination of actual and forecasted demand and cascades it through bills of material, netting exploded demand against existing inventory and planned receipts. The result is a plan that includes the release of purchase orders and shop orders and reschedule messages. While the concept might be simple enough, these bills of material could be many layers deep and encompass hundreds or even thousands of component parts and subassemblies. Forecasts are educated guesses and actual demand can fluctuate from day to day. Without automated MRP there is simply too much data and complexity for a human to possibly work with.

As a result, prior to MRP, other ways of managing inventory became commonplace. You had simple reorder points. Once inventory got below a certain point, you bought some more, whether you actually needed it or not. You also had safety stock as a buffer, and the “two bin” system was quite prevalent. When one bin was empty, you switched to the other and ordered more. These simplistic methods may have been effective in some environments, but the net result was the risk of inflated inventory while still experiencing stock outs. You had lots of inventory, just not what the customer wanted, when it wanted it. And planners and schedulers still had to figure out when to start production and they knew enough to build a lot of slack time into the schedule. So lead times also became inflated and customer request dates were in jeopardy.

Once MRP entered the picture, these were seen as archaic and imprecise planning methods. Even so, most didn’t rush right out and invest in MRP when it was first introduced. In fact now, decades later, the adoption rate of MRP in manufacturing still sits at about 78%. Why? The existing practices were deemed “good enough” and, after all, that’s the way it had always been done.

It required a paradigm shift to understand the potential of MRP and the planning process executed by MRP was complex. Not everyone intuitively understood it. And if they didn’t really understand, planners were unwilling to relinquish control. Particularly since MRP runs were notoriously slow.

It was not unusual for early MRP runs to take a full weekend to process, and during that time nobody could touch the data. This didn’t work so well in 24X7 operations or where operations spanned multiple time zones. Over time, MRP was enhanced so that most systems today run faster and can operate on replicated data, allowing operations to continue. But that only means the plan might be out of date even before the run completes. And MRP never creates a perfect plan. It assumes infinite capacity and “trusts” production run times and supplier lead times implicitly. So while most planners were relieved of the burden of crunching the numbers, they were also burdened with lots of exceptions and expedited orders.

Yet over time, MRP brought a new dimension to material planning. It brought a level of accuracy previously unheard of and helped get inventory and lead times in check. Manufacturers have experienced an average of 10% to 20% reduction in inventory and similar improvements in complete and on-time delivery as a result of implementing MRP.

But through the past three decades, MRP hasn’t changed all that much. Yes it has improved and gotten faster, but it hasn’t changed the game because it still involves batch runs, replicated data and manual intervention to resolve those exceptions and expedite orders. Now with HANA we’re not talking about speeding up the processes by 10% to 20% but by several orders of magnitude, allowing them to run in real time, as often as necessary. But if it was just about speed, we might have seen this problem solved years ago.

You probably don’t remember Carp Systems International or Monenco, both Canadian firms that offered “fast MRP.” Carp was founded in 1984 and released a product in 1990 that brought MRP processing times from tens of hours down to 10 minutes. It ran on IBM’s RS6000 (a family of RISC-based UNIX servers, workstations and supercomputers). But it was both complex and expensive for its time, ranging in price from $150,000 to $1 million. Not only was it expensive and dependent on special servers; in order to work it needed to replicate the data and then apply sophisticated algorithms.

At about the same time, Monenco introduced FastMRP, also a simulation tool, but one that ran on a personal computer. While it cost much less than Carp’s product, it was also less powerful and had significantly fewer features.

You won’t find either of these products on the market today. If speed was all that was required they would have survived and thrived. In order to change the game, you also need to change the process, which is exactly what SAP intends with its new Fiori app for MRP.

The new MRP cockpit includes new capabilities, like the ability to:

  • View inventory position across multiple plants
  • Analyze component requirements with real-time analytics
  • Perform long term MRP simulations
  • Analyze capacity requirements and suggest alternatives

But this too requires a paradigm shift. Manufacturers, like other types of companies, are quite accustomed to making decisions from a snapshot of data, usually in report format, often via spreadsheets. They have become desensitized to the fact that this snapshot is just that: a picture of the data, frozen in time.

What if you never had to run another report? Instead, whenever you needed a piece of data or an answer to a question, you had immediate and direct access, not to the data as it was at the beginning of the day, or the end of last week, but to the latest data in real time? Not only will decision-makers need to adjust to thinking in real time, but they will also have to trust the software to automate much of the thinking for them. Will they be able to sit back and let the software iterate through multiple simulations in order to find the best answer to an exception even before it is reported as an exception? I suspect they will if it is fast enough. And HANA is now delivering at speeds that just a few years ago would have been impossible. But with these speeds accelerating by orders of magnitude, the ability to communicate and collaborate effectively must also accelerate.

Making the Human Connection

It is not enough to change the way users engage with the software; it is also necessary to change the way they engage with other people. How often do you or your employees today express sentiments like:

  • If I just knew who to contact for approval/help…
  • I don’t know what to ask
  • I wish I could check with (several) people on this quickly

What if the software could help? As workflows are streamlined, automated and accelerated, so must the lines of communication and collaboration be strengthened. Whether employees are looking to move a process forward, resolve an issue or mature an idea faster, lack of communication and clumsy modes of collaboration can inhibit the game-changing effect of the technology. That is why SAP has upped its game in the area of Human Capital Management and social collaboration tools. It took a significant step forward with the acquisition of SuccessFactors and JAM and has been blending these capabilities with the HANA platform.

Key Takeaways

Nobody today would disagree that the SAP Business Suite, powered by HANA combines deep and rich functionality with powerful technology. But can it be game changing in terms of how businesses operate? The potential certainly exists, but it’s not just about speed. Changing the game means changing the way we’ve been doing things for decades. Before we can change the process, we need to change the conversation. Are you looking to optimize business processes? Are you ready to talk?


ERP, The Next Generation: The Final Frontier? Part 1

Turning Your Business Into a Starship Enterprise

As the latest movie of the Star Trek franchise comes to a theater near you, let’s go out on a limb here and draw some parallels between Enterprise Resource Planning (ERP) and this entertainment phenomenon that began in 1966 by chronicling the interstellar adventures of the fictitious starship Enterprise. Like the USS Enterprise, whose five-year mission it was to explore new worlds and “to boldly go where no man has gone before,” early versions of ERP charted new territory for enterprise applications. It evolved from MRP (material requirements planning) to MRP II (manufacturing resource planning) and then boldly set out to conquer the “final frontier” of ERP, managing not a small piece of the enterprise, but the enterprise itself. And like the Star Trek franchise, after playing on both large and small screens for more than two decades, a “next generation” was born: faster, more technologically enabled and more in tune with the evolving needs of the galaxy.

This is the first post of a series on Next Generation ERP that will unfold over the next few weeks, addressing the questions: As this next generation of ERP continues to evolve, are you evolving with it? Or are you stuck in the darkness of the 20th century?

The first several parts will be excerpts from a Mint Jutras paper of the same name, to be followed by individual posts about specific vendors. The inclusion or exclusion of a vendor in this series will be based largely on the relationship I have with the vendors, and therefore how deeply and thoroughly I have been briefed on their solution(s). At this point I have no intention to ensure that every major vendor is represented, and the order in which they are presented has no particular significance. In fact, I have already posted two of these:

But before you click on these links, read on for an introduction of our Star Trek themed series.

Star Trek: The Series, The Movie, The Software

Like the voyages of Star Trek that tested the nerves of the captain and crew of the USS Enterprise, ERP has often been an adventure, testing the nerves of CIOs and line of business executives at the helm of the enterprise. As the USS Enterprise explored the far reaches of the galaxy, it encountered alien cultures and new and different life forms. Traditional means of communication and familiar methods of interaction became ineffective. As businesses began routinely expanding beyond international boundaries, distances increased by orders of magnitude and they too experienced new cultures, new languages, new regulatory and reporting requirements and new ways of doing business.

The USS Enterprise had at its disposal amazing technology that allowed the starship to change course and even reverse direction immediately. It could travel at warp speed, using a hypothetical faster-than-light propulsion system. Star Trek was, and still is, science fiction. In contrast, next generation, technology-enabled ERP solutions are very real. They help us cope with the accelerating pace of business, growing volumes of data and higher customer expectations. Yet few can turn on a dime, and unlike Star Trek’s USS Enterprise, ERP can’t operate at warp speed. Or can it? We are now entering a new phase of ERP’s evolution. New in-memory databases and technology are dramatically speeding up run times and eliminating the need for batch processes.

But few are taking advantage of this new technology. The entire gamut of different generations of MRP and ERP is still in operation across the planet today, producing a wide range of value from very low to very high. To many, modern technology-enabled solutions might still seem the stuff of science fiction, when in fact they are in production environments, producing results that are nothing short of amazing. What generation of ERP are you running today? Have you explored the world of very real possibilities recently? If not, are you missing out and losing ground in terms of competitive advantage?

ERP solution providers: If you are interested in obtaining more information on this series and briefing us on how your solution is “next generation,” please contact Lisa Lincoln (lisa@mintjutras.com).
