Tuesday, 31 January 2017

Article: Data Mining Case Study



Introduction
Constantly evolving technology has made it easier to store data, and in parallel, a range of methods has been developed for analyzing it. The field that brings these methods together is called data mining. In this article we examine data mining and its algorithms. Our goal is first to discuss data mining and its algorithms in general. We then examine clustering algorithms, the areas in which they are applied, and the purposes for which they are used. After that, we focus on k-means, one of the clustering algorithms, and its methodology. Finally, we formulate a research question, analyze the properties of the data, and solve the problem in the WEKA application.


What is Data Mining?
Technology has advanced rapidly, and with it the ease of accessing information. Storing data has become as easy as accessing it, so data is growing rapidly and there is data everywhere. This data has to be processed and analyzed. The many data analysis algorithms gathered under one roof are collectively called data mining. Data mining, then, is the process of extracting useful information from a large heap of data. In conclusion, data mining is the process of extracting useful information from large data sets by applying a number of algorithms.

Data mining is a combination of many disciplines, including database systems, statistics, machine learning, and pattern recognition. The algebraic, geometric, and probabilistic viewpoints of data play a key role in data mining. Given a dataset of n points in a d-dimensional space, the fundamental analysis and mining tasks covered in this book include exploratory data analysis, frequent pattern discovery, data clustering, and classification models, which are described next. (Data Mining and Analysis: Fundamental Concepts and Algorithms, Mohammed J. Zaki and Wagner Meira Jr., Cambridge University, pg. 26)

Data mining is mostly used in market analysis and management, corporate analysis and risk management, and fraud detection. The main problems are identifying target customer types, forecasting the future, and prevention. Data mining can also be used in many ways in everyday life. Some of these uses can be listed as follows:
·         Evaluating treatment claims made to hospitals by time, place, and need helps in the early stages of epidemic risk assessment, control, and resource planning.

·         A model that identifies the profiles of users of unmetered (illegally consumed) energy makes it possible to predict potential illegal users and to fight energy theft effectively at low cost.

·         A study that predicts highway traffic intensity by region and time makes it possible, for example, to minimize accident rates through correct resource planning at the right time.

·         When implementing public support schemes, institutional risk scoring helps direct support to the right organizations, in the right amounts and for the right goals, which increases the success of the programs. Similarly, identifying customers who are at risk of not paying when allocating credit reduces the amount of bad loans.


Commonly used data mining application areas can be listed as follows: marketing, banking, retailing and sales, manufacturing and production, brokerage and securities trading, government and defense, computer hardware and software, airlines, health care, broadcasting, homeland security, insurance, and police work (Week 4 Presentation, Keziban Seçkin, AYBU, pg. 12).


Looking at data mining algorithms in general, the main families are: classification algorithms, of which the most commonly used are Naive Bayes and decision trees; clustering algorithms, of which the most commonly used are hierarchical clustering and k-means; association rules (the Apriori algorithm); text mining; and web mining.

Classification is used to analyze the historical data stored in a database and to automatically generate a model that can predict future behavior. Its objective is to find a derived model that describes and distinguishes data classes or concepts. (Week 4 Presentation, Keziban Seçkin, AYBU, pg. 8)
·         Naive Bayes is based on classification, so we need training data from which the classification algorithm builds a classifier (model), and test data to estimate the accuracy of the classification rules (supervised learning). The learned model can then be used to make predictions.


·         Decision Tree: A decision tree is a structure that includes a root node, branches, and leaf nodes. Each internal node denotes a test on an attribute, each branch denotes the outcome of a test, and each leaf node holds a class label. The topmost node in the tree is the root node. (Data Mining Tutorial, Tutorials Point, pg. 31) A minimal code sketch of this supervised train/test workflow is given below.
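As a small illustration of the supervised workflow described in the two items above (training data, building a classifier or model, and estimating accuracy on test data), the following Python sketch assumes the scikit-learn library is available; the Iris dataset, the 70/30 split, and the specific model settings are illustrative assumptions, not part of the article.

```python
# A minimal sketch of supervised classification (assumes scikit-learn is installed).
# The Iris dataset, the 70/30 split, and the two models are illustrative choices.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)          # labeled data
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)  # training data vs. test data

for model in (GaussianNB(), DecisionTreeClassifier(max_depth=3)):
    model.fit(X_train, y_train)            # build the classifier (model)
    preds = model.predict(X_test)          # predict labels for unseen records
    print(type(model).__name__, accuracy_score(y_test, preds))
```

Either learned model can then be applied to new, unlabeled records with its predict method, which is the prediction step mentioned above.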
Clustering partitions a database into segments in which the members of a segment share similar qualities. We will examine this topic in detail later.
Association is a category of data mining algorithms that establishes relationships among items that occur together in a given record (a simplified counting sketch is given after these definitions).
Text mining is the application of data mining to unstructured or semi-structured text files. It entails generating meaningful numerical indices from the unstructured text and then processing these indices with various data mining algorithms. That is, by scanning an existing text (counting how often each word occurs, examining its frequency range, and so on), it yields a meaningful result.
Web mining is the discovery and analysis of interesting and useful information from the Web, about the Web, and usually through Web-based tools (Week 4 Presentation, Keziban Seçkin, AYBU).
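To make the idea of items that occur together more concrete, here is a hedged Python sketch that counts frequent item pairs against a minimum support threshold. It is a brute-force simplification of what the Apriori algorithm does (Apriori additionally prunes candidates level by level); the example baskets and the 40% support threshold are invented for illustration.

```python
# Simplified frequent-pair counting (illustrative transactions; not full Apriori).
from itertools import combinations
from collections import Counter

transactions = [                      # hypothetical shopping baskets
    {"bread", "milk"},
    {"bread", "diapers", "beer"},
    {"milk", "diapers", "beer"},
    {"bread", "milk", "diapers"},
    {"bread", "milk", "beer"},
]
min_support = 0.4                     # a pair must appear in at least 40% of baskets

pair_counts = Counter()
for basket in transactions:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1        # count how often each pair co-occurs

for pair, count in pair_counts.items():
    support = count / len(transactions)
    if support >= min_support:
        print(pair, round(support, 2))   # frequent pairs and their support
```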

CLUSTERING ALGORITHMS
What is Clustering?

Grouping similar data close together into emerging classes is called clustering. Classification is mostly used as a supervised learning method, whereas clustering is used for unsupervised learning. The logic of clustering is this: make the similarity within each cluster as high as possible, and make the difference between clusters as large as possible. Clustering is among the oldest data mining techniques. It can be used in many areas; for example, consider the records of products purchased by the customers of a business. From such data sets, clustering algorithms can provide useful information, for example how many shirts of each size (small, medium, large, etc.) to produce.

To define clustering academically: clustering is a standard procedure in multivariate data analysis. It is designed to explore an inherent natural structure of the data objects, where objects in the same cluster are as similar as possible and objects in different clusters are as dissimilar as possible.

Clustering is an exploratory data analysis method. Therefore, the analyst might have little or no information about the parameters of the resulting cluster analysis. In typical uses of clustering, the goal is to determine all of the following: the number of clusters, the absolute and relative positions of the clusters, the size of the clusters, the shape of the clusters, and the density of the clusters.
The cluster properties are explored in the process of the cluster analysis, which can be split into the following steps.
1. Definition of objects: Which are the objects for the cluster analysis?
2. Definition of clustering purpose: What is the interest in clustering the objects?
3. Definition of features: Which are the features that describe the objects?
4. Definition of similarity measure: How can the objects be compared?
5. Definition of clustering algorithm: Which algorithm is suitable for clustering the data?
6. Definition of cluster quality: How good is the clustering result? What is the interpretation? (Clustering Algorithms and Evaluations, pg. 180)
Application of Cluster Analysis
Cluster analysis is widely used in areas such as market research, pattern recognition, data analysis, and image processing.
Clustering can also help marketers find distinct target groups in their customer base and describe those groups based on their purchasing patterns.
In the field of biology, it can be used to derive plant and animal taxonomies, categorize genes with similar functions and gain insight into structures inherent to populations.
Clustering also helps in description of areas of similar land use in an earth observation database. It also helps in the description of groups of houses in a city according to house type, value, and geographic location.
Clustering also helps in classifying texts on the web for information exploration.
Clustering is also used in outlier detection applications, such as the detection of credit card fraud.

Requirements of Clustering in Data Mining
The following statements throw light on why clustering is required in data mining: 
Scalability - We need highly scalable clustering algorithms to deal with large databases.
Ability to deal with different kinds of attributes - Algorithms should be able to operate on any kind of data, such as interval-based (numerical), categorical, and binary data.
Discovery of clusters with arbitrary shape - The clustering algorithm should be capable of detecting clusters of arbitrary shape. It should not be limited to distance measures that tend to find small spherical clusters.
High dimensionality - The clustering algorithm should be able to handle not only low-dimensional data but also high-dimensional data.
Ability to deal with noisy data - Databases contain noisy, missing, or erroneous data. Some algorithms are sensitive to such data and may produce poor-quality clusters.
Interpretability - The clustering results should be interpretable, comprehensible, and usable.

Algorithms for Clustering
In this section we describe the most well-known clustering algorithms. The main reason for having many clustering methods is the fact that the notion of “cluster” is not precisely defined (Estivill-Castro, 2000). Consequently many clustering methods have been developed, each of which uses a different induction principle. Farley and Raftery (1998) suggest dividing the clustering methods into two main groups: hierarchical and partitioning methods. Han and Kamber (2001) suggest categorizing the methods into additional three main categories: density-based methods, model-based clustering and grid-based methods. An alternative categorization based on the induction principle of the various clustering methods is presented in (Estivill-Castro, 2000).
We will first say a little about hierarchical methods; after that, we focus on the k-means algorithm.
Hierarchical Methods
Hierarchical clustering involves creating clusters that have a predetermined ordering from top to bottom (a short code sketch is given at the end of this subsection).
Typical clustering applications include:
      the discovery of different customer groups in a grocery business and the shopping patterns of these groups,
      the classification of similar genes according to plant and animal taxonomies and their functions in biology,
      the classification of houses according to type, value, and geographic location in city planning,
      and the classification of documents for information discovery on the Internet.

Summary of Hierarchical Clustering Methods
• No need to specify the number of clusters in advance.
• The hierarchical structure maps nicely onto human intuition for some domains.
• They do not scale well: time complexity of at least O(n²), where n is the number of objects.
• Like any heuristic search algorithm, local optima are a problem.
• Interpretation of the results is (very) subjective.
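As a hedged illustration of agglomerative (bottom-up) hierarchical clustering, the sketch below assumes NumPy and SciPy are available; the synthetic data, the Ward linkage method, and the cut into three clusters are illustrative choices rather than anything prescribed in this article.

```python
# Agglomerative hierarchical clustering sketch (assumes numpy and scipy).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# Illustrative data: three loose groups of 2-D points.
X = np.vstack([
    rng.normal(loc=(0, 0), scale=0.5, size=(20, 2)),
    rng.normal(loc=(5, 5), scale=0.5, size=(20, 2)),
    rng.normal(loc=(0, 5), scale=0.5, size=(20, 2)),
])

Z = linkage(X, method="ward")                     # build the full merge hierarchy
labels = fcluster(Z, t=3, criterion="maxclust")   # cut the tree into 3 clusters afterwards
print(labels)
```

Note that the number of clusters is not needed to build the hierarchy itself; it only comes in when the tree is cut, which matches the first point in the summary above.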
K-MEANS
One of the earliest clustering algorithms, k-means was developed by J. B. MacQueen in 1967. It is an unsupervised clustering method: it groups the data into K clusters and attempts to assign the data points so as to minimize the sum of squared distances to their cluster mean. There are two main goals:

1- Values within a cluster are as similar as possible.
2- Values in different clusters are as dissimilar as possible.
The main idea is to define a center for each cluster. The number of clusters K is chosen by the analyst, and choosing K is the most difficult step: if K is too small, objects that should end up in different clusters fall into the same cluster, and if K is too large, the objects are spread too thinly across clusters. After K is chosen, the initial cluster centers are selected at random; the easiest strategy is to pick points that are far apart from one another. Once the centers are selected, the data points are assigned to clusters according to their Euclidean distance to each center. New centers are then computed and the process is iterated. These steps are repeated until every data point settles into a cluster, since the K-means assignment mechanism allows each data point to belong to only one cluster.
In conclusion, the data set is separated into K clusters, the distance of each point to its centroid is measured, and the cluster means are recalculated; this use of K means (averages) gives the algorithm its name.
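Stated as a formula (standard textbook notation, not taken from this article), k-means looks for cluster assignments and centroids that minimize the within-cluster sum of squared Euclidean distances:

```latex
J = \sum_{j=1}^{K} \sum_{x_i \in C_j} \lVert x_i - \mu_j \rVert^{2},
\qquad
\mu_j = \frac{1}{|C_j|} \sum_{x_i \in C_j} x_i
```

Here C_j is the set of points assigned to cluster j and mu_j is its mean; each iteration of the algorithm below reduces (or leaves unchanged) this objective J.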

How K-Means Works

1) Randomly select ‘k’ cluster centers.
2) Calculate the distance between each data point and cluster centers.
3) Assign each data point to the cluster whose center is closest.
4) Recalculate the new cluster centers as the mean of the points assigned to them.
5) Recalculate the distance between each data point and the newly obtained cluster centers.
6) If no data point was reassigned, stop; otherwise repeat. (A code sketch of these steps follows.)
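The Python sketch below follows the six steps above using NumPy. It is a plain illustrative implementation, not the WEKA tool mentioned in the introduction; the synthetic blobs, K = 3, the random seed, and the iteration cap are all assumptions made for the example.

```python
# K-means sketch following the steps above (assumes numpy; data and K are illustrative).
import numpy as np

def kmeans(X, k, max_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    # 1) Randomly select k data points as the initial cluster centers.
    centers = X[rng.choice(len(X), size=k, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for it in range(max_iter):
        # 2-3) Compute Euclidean distances and assign each point to the nearest center.
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        new_labels = dists.argmin(axis=1)
        # 6) Stop when no point was reassigned (skip the check on the first pass).
        if it > 0 and np.array_equal(new_labels, labels):
            break
        labels = new_labels
        # 4-5) Recalculate each center as the mean of the points assigned to it.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Illustrative data: three blobs of 2-D points.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(m, 0.4, size=(30, 2)) for m in ((0, 0), (4, 4), (0, 4))])
labels, centers = kmeans(X, k=3)
print(centers)
```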
ADVANTAGES:
* It is suitable for large data sets and can be run with MapReduce.
* It keeps the model simple by assuming clusters are symmetric (roughly spherical).
* It is fast, robust, and easy to understand.
DISADVANTAGES:
* It is difficult to choose the number of clusters (K).
* If two clusters are very similar or overlapping, k-means cannot tell that there are two clusters.
* Different results can be obtained from different runs (initializations).
* Random selection of the initial cluster centers can be inefficient.
* The algorithm does not work well for non-linearly separable data.
* It is sensitive to noisy data and outliers, which are still assigned to clusters.




CONCLUSION OF THE ARTICLE
We have examined data mining and learned that it draws heavily on machine learning. We need to analyze data, so we need methodologies, and data mining gathers these algorithms together. Using these algorithms makes life easier, because they help us understand what large amounts of data mean and how to act for the future. Preparing strategies is also very important: using today's data, firms can predict the future and apply strategic plans. In conclusion, data mining is very important for the Information Era, because data can be reached everywhere, but understanding it is a science.




References
Data Clustering: A Review. A. K. Jain (Michigan State University), M. N. Murty (Indian Institute of Science), and P. J. Flynn (The Ohio State University).
Comparison Between Data Clustering Algorithms. Osama Abu Abbas, Computer Science Department, Yarmouk University, Jordan.
From Data Mining to Knowledge Discovery in Databases. Usama Fayyad, Gregory Piatetsky-Shapiro, and Padhraic Smyth.
Clustering Algorithms and Evaluations.
An Efficient K-Means Clustering Algorithm. Khaled Alsabti (Syracuse University), Sanjay Ranka (University of Florida), and Vineet Singh (Hitachi America, Ltd.).
Cluster Analysis: Basic Concepts and Algorithms.
Data Clustering: Algorithms and Applications. Edited by Charu C. Aggarwal and Chandan K. Reddy.
Clustering Methods. Lior Rokach and Oded Maimon, Department of Industrial Engineering, Tel-Aviv University.
Data Mining and Analysis: Fundamental Concepts and Algorithms. Mohammed J. Zaki (Rensselaer Polytechnic Institute, Troy, New York) and Wagner Meira Jr. (Universidade Federal de Minas Gerais, Brazil).
K-means Algorithm: Cluster Analysis in Data Mining. Edited by Zijun Zhang.
K-means Algorithm. Mark Herbster, University College London, Department of Computer Science.
K-means Clustering. Edited by Ke Chen.
K-means Clustering via Principal Component Analysis. Chris Ding and Xiaofeng He.
An Efficient k-Means Clustering Algorithm: Analysis and Implementation. Tapas Kanungo, David M. Mount, Nathan S. Netanyahu, Christine D. Piatko, Ruth Silverman, and Angela Y. Wu.
Our lecture presentations (Week 4 Presentation, Keziban Seçkin, AYBU).


Wednesday, 25 January 2017

CUSTOMER RELATIONSHIP MANAGEMENT

CASE:

Never underestimate your clients' complaints, no matter how funny they might seem!

This is a real story that happened between the customer of General Motors and its Customer-Care Executive. Please read on.....

A complaint was received by the Pontiac Division of General Motors:

'This is the second time I have written to you, and I don't blame you for not answering me, because I sounded crazy, but it is a fact that we have a tradition in our family of Ice-Cream for dessert after dinner each night, but the kind of ice cream varies so, every night, after we've eaten, the whole family votes on which kind of ice cream we should have and I drive
down to the store to get it. It's also a fact that I recently purchased a new Pontiac and since then my trips to the store have created a problem.....

You see, every time I buy a vanilla ice-cream, when I start back from the store my car won't start. If I get any other kind of ice cream, the car starts just fine. I want you to know I'm serious about this question, no matter how silly it sounds "What is there about a Pontiac that makes it not start when I get vanilla ice cream, and easy to start whenever I get any other kind?" The Pontiac President was understandably skeptical about the letter, but sent an Engineer to check it out anyway.

The latter was surprised to be greeted by a successful, obviously well educated man in a fine neighborhood. He had arranged to meet the man just after dinner time, so the two hopped into the car and drove to the ice cream store. It was vanilla ice cream that night and, sure enough, after they came back to the car, it wouldn't start.

The Engineer returned for three more nights. The first night, they got chocolate. The car started. The second night, he got strawberry. The car started. The third night he ordered vanilla. The car failed to start.

Now the engineer, being a logical man, refused to believe that this man's car was allergic to vanilla ice cream. He arranged, therefore, to continue his visits for as long as it took to solve the problem. And toward this end he began to take notes: he jotted down all sorts of data: time of day, type of gas used, time to drive back and forth, etc.

In a short time, he had a clue: the man took less time to buy vanilla than any other flavor. Why? The answer was in the layout of the store. Vanilla, being the most popular flavor, was in a separate case at the front of the store for quick pickup. All the other flavors were kept in the back of the store at a different counter where it took considerably longer to check out
the flavor.

Now, the question for the Engineer was why the car wouldn't start when it took less time. Eureka - Time was now the problem - not the vanilla ice cream!!!! The engineer quickly came up with the answer: "vapor lock". 

It was happening every night; but the extra time taken to get the other flavors allowed the engine to cool down sufficiently to start. When the man got vanilla, the engine was still too hot for the vapor lock to dissipate.

Even crazy-looking problems are sometimes real, and all problems seem simple only when we find the solution with clear and logical thinking.

Don't just say it is "IMPOSSIBLE" without putting in a sincere effort....
What really matters is your attitude and your perception.




Customer relationship management (CRM) is an approach to managing a company's interaction with current and potential future customers. It tries to analyze data about customers' history with a company and to improve business relationships with customers, specifically focusing on customer retention and ultimately driving sales growth.

Types of CRM
· Operational: The primary goal of customer relationship management systems is to integrate and automate sales, marketing, and customer support. Therefore, these systems typically have a dashboard that gives an overall view of the three functions on a single page for each customer that a company may have. The dashboard may provide client information, past sales, previous marketing efforts, and more, summarizing all of the relationships between the customer and the firm. Operational CRM is made up of three main components: sales force automation, marketing automation, and service automation.
· Analytical
The role of analytical CRM systems is to analyze customer data collected through multiple sources and present it so that business managers can make more informed decisions. Analytical CRM systems use techniques such as data mining, correlation, and pattern recognition to analyze the customer data. These analytics help improve customer service by finding small problems which can be solved, perhaps, by marketing to different parts of a consumer audience differently. For example, through the analysis of a customer base's buying behavior, a company might see that this customer base has not been buying a lot of products recently. After scanning through this data, the company might think to market to this subset of consumers differently, in order to best communicate how this company's products might benefit this group specifically.
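As a hedged tie-in to the data mining discussion above, the sketch below shows one way an analytical CRM workflow might segment customers from simple purchase-behavior features (recency, frequency, and total spend). The feature values, the scaling step, and the choice of k-means with two segments are illustrative assumptions, not a description of any particular CRM product.

```python
# Illustrative customer segmentation with k-means (assumes numpy and scikit-learn).
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Hypothetical per-customer features: [days since last purchase, purchases per year, total spend]
customers = np.array([
    [5,   40, 1200.0],
    [200,  2,   80.0],
    [15,  25,  900.0],
    [320,  1,   40.0],
    [9,   35, 1500.0],
    [180,  3,  120.0],
])

scaled = StandardScaler().fit_transform(customers)   # put features on a common scale
segments = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scaled)
print(segments)   # e.g., active high-value buyers vs. lapsed low-value buyers
```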
· Collaborative
The third primary aim of CRM systems is to incorporate external stakeholders such as suppliers, vendors, and distributors, and to share customer information across organizations. For example, feedback can be collected from technical support calls, which could help provide direction for marketing products and services to that particular customer in the future.
A good CRM should provide support for the following functions:
· capture and maintenance of customer needs, motivations, and behaviors over the lifetime of the relationship
· facilitation of the use of customer experiences for continuous improvement of this relationship
· integration of marketing, sales, and customer support activities, measuring and evaluating the process of knowledge acquisition and sharing
CRM systems in practice
· Call centers
· Contact center automation
· Social media
· Location-based services

· CRM systems for business-to-business transactions
CRM ANALYSIS PROCESSES
· LEAD MANAGEMENT: The focus of this process is on organizing and prioritizing contacts with prospective customers. It involves integration with campaign management and service management, as well as customer profiling. A sub-process of lead management is customer scoring, which uses quantitative and qualitative measures to rank the customer based on his or her interest in the product or service. This filtering process allows for more precise target marketing and it lowers the contact costs.
· CUSTOMER PROFILING: The focus of this process is to develop a marketing profile of every customer by observing his or her buying patterns, demographics, buying and communication preferences, and other information that allows categorization of the customer. The knowledge generated from this process feeds into campaign management, sales management, service management, and the other processes discussed earlier. In addition, this process allows more individualized contact with the customer.
· FEEDBACK MANAGEMENT: A good CRM requires a closed knowledge management loop that consolidates, analyzes, and shares the customer information collected by CRM delivery and support processes with the analysis process, and vice versa. The loop can provide a road map for a continuous improvement process for the company's products and services. A good system will discard unnecessary data and focus mainly on the knowledge useful for making better decisions.
CRM COMPONENTS
· MARKET RESEARCH
· SALES FORCE AUTOMATION (SFA)
· CUSTOMER SERVICE AND SUPPORT
· DATA MINING AND ANALYTICS
· MARKET RESEARCH: The two key functionalities here are campaign management and market analysis. Campaign management provides support for preparing such things as marketing budgets, ad placement, sales targeting, and response management. Marketing analysis tools provide statistical and demographic analysis, Web site traffic monitoring, and profiling tools. With the amount of data collected today, these tools provide sophisticated segmenting and targeting capabilities in real time.
· SALES FORCE AUTOMATION (SFA): Sales force automation software has been around since long before CRM became a buzzword. Some of the current CRM vendors were originally in the SFA market. SFA tools provide basic functionality for sales personnel to automate sales lead distribution and tracking, sales reporting, pipeline management, contacts centralization and management, and group collaboration. In addition, they include such software for sales managers and executives as opportunity management, forecasting, reporting, analytics, and customizable dashboard capabilities so that they can be confident that their teams are producing at their full capacity. The goal of SFA software is to give businesses the upper hand with their sales data and to empower sales reps to spend more time selling and less time on administration.
· CUSTOMER SERVICE AND SUPPORT: The customer service function has gone through major changes since the advent of the Internet. Online help desks have become a common source for customers to find quick answers to complex technical questions. Customer service originally consisted of setting up a call center with access to a customer database and the Frequently Asked Questions (FAQs) Web site page. Today, with sophisticated CRM back-ends, companies have been able to consolidate the two areas into help desk support centers. Customer service functionality typically includes help desk ticket management software, e-mail, interactive chat, Web telephony, and other interaction tools connected to a fully integrated customer database, which is connected to the supply chain management and ERP application. These tools can be accessed by a trained help center agent or by customers directly via the Internet.
· DATA MINING AND ANALYTICS: The amount of data being generated by Web-driven business has been a driver for data mining and analytics functionality because it represents an extension of existing product lines rather than the creation of new ones. Such businesses as Amazon and eBay generate gigabytes of data per day, and even small Web sites easily generate megabytes of data. This data must be collected, sorted, organized, and analyzed for trends, demographics, cross-selling opportunities, and identification of other sales patterns. Sophisticated OLAP and data mining software are often integrated with CRM packages.

Tuesday, 24 January 2017

ERP IMPLEMENTATION LIFE CYCLE

ERP applications are prepackaged software developed by commercial software vendors and custom installed for organizations to automate and integrate their various business processes. Although ERP systems are packaged software, they are very different from PC-based software packages (e.g., Microsoft Office or other software) that you may have purchased for personal use, as shown in Table 4-1.
ERP IMPLEMENTATION PLAN
There are three major implementation plan choices:
1.Comprehensive.
2. Middle-of-the-Road.
3. Vanilla.


Methodology refers to a systematic approach to solving a business problem. ERP methodology builds on the theory that an enterprise can maximize its returns by maximizing the utilization of its fixed supply of resources. Information technology, with its increasing computer power and ability to correlate pieces of information, has proven to be the best tool for business problem solving. Like the SDLC, an ERP development life cycle provides a systematic approach to implementing ERP software in a changing but limited-resource organizational environment. There are many different vendor-driven methodologies or approaches that use a traditional ERP development life cycle or rapid ERP life cycles (e.g., Total Solution, FastTrack, Rapid-Re, ASAP, and BIM).
TRADITIONAL ERP LIFE CYCLE
The traditional ERP life cycle includes the following major stages:
Stage 1. Scope and Commitment Stage.
Stage 2. Analysis and Design Stage.
Stage 3. Acquisition & Development Stage.
Stage 4. Implementation Stage.
Stage 5. Operation Stage
ROLE OF CHANGE MANAGEMENT
Change management (CM) plays an important role throughout the ERP life cycle. System failures often occur when attention is not devoted to it from the beginning stages.
 RAPID ERP LIFE CYCLES
They provide different methodologies and techniques for rapid or accelerated implementation. Scripts and wizards provided by consultants can help automate some of the more common tasks that occur during an implementation. These include migration of data, identification of duplicate data, and other standard tasks.
The appropriate implementation model may vary based on company, culture, software, budget, and the purpose of the implementation, but previous implementation experience of the program management and consultants will likely be the largest driving factor in determining the best approach.
TOTAL SOLUTION
1. The Value Proposition.
2. Reality Check.
3. Aligned Approach.
4. Success Dimension.
5. Delivering Value.
FASTTRACK
Phases: Designed to reflect and integrate decisions regarding business redesign, organizational change and performance, training, process and systems integrity, client-server technologies, and technical architecture.
 1. Scoping and Planning: Project definition and scope. Project planning is initiated.
2. Visioning and Targeting: Needs assessment. Vision and targets identified. As-is modeling.
 3. Redesign: To-be Modeling. Software design and development.
4. Configuration: Software development. Integration test planning.
5. Testing and Delivery: Integration testing. Business and system delivery.
Areas: In addition, FastTrack identifies five areas (groups), each an individual thread to be woven into a cohesive fabric through its five-phase workplan. The areas and the functions performed are as follows:
1. Project Management (project organization, risk management, planning, monitoring, communications, budgeting, staffing, quality assurance).
2. Information Technology Architecture (hardware and network selection, procurement, installation, operations, software design, development, installation).
3. Process and Systems Integrity (security, audit control).
4. Change Leadership (leadership, commitment, organizations design, change-readiness, policies and procedures, performance measurements).
5. Training and Documentation (needs assessment, training design and delivery for the project team, management, end-users, operations, and help desk; scripting of end-user and operations documentation).
RAPID RE
Gateway, a consulting firm in New York, has developed an ERP life cycle methodology called Rapid Re. The five-stage, 54-step modular methodology is customized to the needs of each project because that is what happens in practice. Individual projects skip, rearrange, or recombine tasks to meet their needs or give greater or lesser emphasis to some tasks.
Stage 1. Preparation. Mobilize, organize, and energize the people who will perform the reengineering project.
Stage 2. Identification. Develop a customer-oriented process model of the business.
Stage 3. Vision. Select the processes to reengineer and formulate redesign options capable of achieving breakthrough performance
Stage 4. Solution. Define the technical and social requirements for the new processes and develop detailed implementation plans.
Stage 5. Transformation. Implement the reengineering plans. In an ideal project, stages one and two consider all key processes within a company and conclude with a step that sets priorities for the processes to reengineer. The other stages are executed repeatedly for each process selected for reengineering.
ACCELERATED SAP (ASAP)
The ASAP Roadmap is a detailed project plan by SAP that describes all activities in an implementation. It includes the entire technical area to support technical project management, and addresses such concerns as interfaces, data conversions, and authorizations earlier than most traditional implementations do. The ASAP Roadmap consists of five phases: project preparation, business blueprint, realization, final preparation, and go-live and support (continuous change).
Phase 1. Project Preparation. Proper planning and assessing organizational readiness are essential. Determine whether there is:
full agreement that all company decision makers are behind the project
a clear set of project objectives
an efficient decision-making process
a company culture that is willing to accept change.
Phase 2. Business Blueprint.
Phase 3. Realization.
Phase 4. Final Preparation.
Phase 5. Go-Live and Support.
BUSINESS INTEGRATION METHODOLOGY (BIM)
The BIM methodology, developed by Accenture Systems in the 1990s, is targeted for full-scale ERP projects that diagnose business integration needs, design business strategies and architectures, deliver one or more business capabilities to meet those needs, and ensure that the value of those capabilities can be sustained over time.
1. The Planning Phase.
2. The Delivering Phase.
3. The Managing Phase.
4. The Operating Phase.

