
S4 HANA Sales 1809 (Delta Changes)

By Anindya Sunder Bhattacharjee 534132

OLTP- Online Transaction Processing- Present in ECC

OLAP- Online Analytical Processing- Present in BI/BW

In the S4 HANA system, OLAP and OLTP are integrated, together with planning capabilities like MRP Live, DDMRP, aATP and PPDS. In ECC we had separate subsystems (BI/BW, APO) because of database limitations. Those technical constraints led SAP to the HANA database, so S4 HANA now combines OLTP + OLAP + EWM + TM + planning such as MRP Live, DDMRP, aATP and PPDS (Production Planning and Detailed Scheduling, a part of APO).

SAP HANA has analytics, a data model, the XS application layer and libraries, so we can write code such as SQL. HANA Studio was introduced so that we can access the SAP HANA database.

Sidecar Scenario:

ECC running on any database is known as the main-car scenario. Data is extracted from ECC and loaded into the HANA database using the SLT software, and doing the reporting and analysis in HANA is called the sidecar scenario. In HANA, with HANA Live, we put the BI tool Lumira on top to achieve the reporting output. All transactions happen in ECC (the main car) and reporting happens in HANA (the sidecar). For example, if we need the MARA, MARC, MARD and EKKO tables for reporting, these tables are brought into HANA using SLT, and HANA Live generates the reports there. HANA Live is a tool to generate reports; we connect the BI tool Lumira to it, and it easily consumes HANA Live views and creates graphs and pie charts. The benefit of this approach is that we do not disturb the main system.

What is SLT:

System Landscape Transformation, a middleware. The problem with sidecar is that it increases the data footprint, as we are copying data from ECC to HANA. There is also a lag of 3-5 seconds in the data transfer, and sometimes there are issues with the SLT tool itself. HANA Live does the database table linkage and creates the database model. SLT is for replication from ECC to the HANA database. We could also use the BODS tool (BusinessObjects Data Services) instead of SLT, because SLT sometimes adversely affects performance. SLT is an ETL tool; ETL means Extract, Transform, Load.
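The extract-transform-load idea behind tools like SLT or BODS can be sketched roughly as below. This is a minimal illustration only: the table contents, field names (MATNR, MAKTX) and the transformation step are illustrative stand-ins, not actual SLT internals.

```python
# Minimal ETL sketch: extract rows from a source "ECC" table,
# transform them in flight, and load them into a target "HANA" table.
# All data structures here are illustrative stand-ins.

source_ecc = [
    {"MATNR": "M-01", "MAKTX": "  Pump  "},
    {"MATNR": "M-02", "MAKTX": "Valve"},
]

def extract(table):
    # Extract: read raw rows from the source system
    return list(table)

def transform(rows):
    # Transform: clean/normalize values during the transfer
    return [{"MATNR": r["MATNR"], "MAKTX": r["MAKTX"].strip().upper()} for r in rows]

def load(rows, target):
    # Load: append the cleaned rows into the target table
    target.extend(rows)

target_hana = []
load(transform(extract(source_ecc)), target_hana)
print(target_hana)
```

The same three-stage shape applies whether the middleware replicates in near real time (SLT) or in batches (BODS).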

Suite On HANA:

ECC on the HANA database with enhancement pack EHP7/8. The main thing is OLTP and OLAP together.

Just FYI: table ACDOCA is the FI Universal Journal; FI and CO together form it.

How do we get output from S4 HANA? Through the SAP Web Dispatcher we can see it on PC, mobile and tablet. A front-end server (Fiori on NetWeaver) and a back-end server (SAP S4 HANA plus SAP NetWeaver) with the SAP HANA database are compulsory.
Fiori apps also have versions like 1.0 and 2.0, and each works only with specific versions of the HANA DB, S4 HANA and NetWeaver. S4 HANA Simple Logistics is sometimes known as MM&O.

How data footprint gets reduced:

1.Compression

2.Column Store

3. Removal of index tables. To understand index tables, take the example of a book: to find a chapter, we first check the index, and from the index we get the page number. With the help of the index we find our chapter quickly; without it we would need to search the whole book. On the other hand, the index itself takes up some pages of the book. Index tables work the same way: faster results, but a higher data footprint. For example, for the VBAP table ABAPers had created six different indexes, which in the background created six different tables, increasing the data footprint. In S4 HANA all index tables are eliminated by the column storage, which is known as an inverted index: each column acts as an index. To see the index tables, use table DD12L in SE16N. If we execute it, we can see in the INC/EX column that they are excluded due to the HANA database (HDB).

(Screenshots of the DD12L output from S4 HANA and from ECC.)


A row-based database stores the fields of a record side by side, and for any query the system has to go through all of them to fetch data. For a column-based structure, the main point is that the system stores only unique values: if a customer name repeats, it is stored just once, whereas in row-based storage the repetitions remain. Column-based storage therefore reduces the data footprint. Column-based storage is known as an inverted index. All records are linked with each other through a record ID; each column value carries such a reference, and that is why no index tables are required.
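The claim that "each column acts as an index" can be sketched like this: from a stored column we can derive an inverted index that maps a value straight to the record IDs holding it, so no separate index tables are needed. The data is illustrative.

```python
# "Each column acts as an index": build an inverted index mapping a
# column value to the record IDs that contain it.

from collections import defaultdict

def inverted_index(column):
    postings = defaultdict(list)
    for record_id, value in enumerate(column):
        postings[value].append(record_id)
    return dict(postings)

customers = ["ACME", "GLOBEX", "ACME", "INITECH"]
idx = inverted_index(customers)
print(idx["ACME"])  # record IDs found without scanning other columns
```

A lookup by value is then a direct dictionary access instead of a full scan, which is what the removed index tables used to provide.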

Aggregate tables like VAKPA, VAPMA, VLKPA, VLPMA, VRKPA and VRPMA are removed. The material document tables MKPF and MSEG are now replaced by MATDOC and MATDOC_EXTRACT. The data footprint is now about one tenth.

Now the question is: in S4 HANA, do we get no data if we execute the MARD table? The answer is that we will still get data. In SE16N, when we enter the MARD table, it calls a proxy object; that proxy object is assigned to a CDS view, and that CDS view actually fetches the data from the MATDOC table. That is why we can still see data in the MARD table. A CDS view is a virtual data model, a type of program written in SQL that runs on the HANA database. This mainly supports custom Z programs that read from tables which are obsolete in S4 HANA: such programs are not affected, as they can still fetch data via the obsolete tables.

Application (presentation) layer → proxy object → CDS view program → HANA DB (MATDOC, MATDOC_EXTRACT; material quantity calculated on the fly).

The CDS view, a program written in the application layer, does the calculation on the fly. Embedded analytics happens in CDS. A CDS view is a program that combines multiple tables, does calculations and fetches the result. When required, ABAPers create custom CDS views.
MATDOC VS MATDOC_EXTRACT Table:

In the MATDOC_EXTRACT table we get monthly aggregated stock, whereas in MATDOC we have the details of each line item. So if there is a requirement for a material report, the CDS view program fetches the data from MATDOC_EXTRACT, which is mainly used for reporting and viewing. When data comes in, both tables get updated, but for reporting mainly MATDOC_EXTRACT is used, as it has far fewer line items and gives only monthly aggregates.
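The MATDOC vs MATDOC_EXTRACT idea can be sketched as below: one store keeps every material-document line item, and a reporting view computes (or keeps) the monthly aggregate, which has far fewer rows. The field names ("material", "month", "qty") are simplified stand-ins, not the real table layouts.

```python
# Sketch: MATDOC-style line items vs a monthly aggregate as a
# reporting view would compute it on the fly.

from collections import defaultdict

matdoc = [  # one row per goods-movement line item
    {"material": "M-01", "month": "2019-01", "qty": 10},
    {"material": "M-01", "month": "2019-01", "qty": -4},
    {"material": "M-01", "month": "2019-02", "qty": 7},
]

def monthly_aggregate(line_items):
    # Collapse line items to one total per (material, month)
    totals = defaultdict(int)
    for li in line_items:
        totals[(li["material"], li["month"])] += li["qty"]
    return dict(totals)

print(monthly_aggregate(matdoc))
```

Three line items collapse to two monthly rows here; on real volumes the reduction is what makes the aggregate store the faster choice for reporting.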

Please note that what was called a proxy object in 1610 is known as a replacement object in 1809. To see it, enter a table like MARD in SE11, execute, and find it under Extras. This replacement object is linked with the CDS view.

MATDOC and MATDOC_EXTRACT are interlinked with a GUID, as below.

4. We could move data to the cold layer in the HANA DB, which is effectively archiving. This also reduces the data volume. Based on the data policy, we put data into cold storage. After 2025, SAP intends to reduce the data further with an event-oriented data model, where all tables would be reduced to just 10-12 tables.

HANA Studio:

It is used by data modelers, admins, Basis and developers, and to some extent by functional consultants for ad hoc analysis and HANA Live (in the sidecar scenario). The SAP HANA DB works on the schema concept: data is stored in schemas.

(Diagram: schemas → package → view → Lumira BI tool for reporting.)

For example, table MARA is saved in a schema and is addressed as Schema1.MARA. Typical schema names are S4HANA, S4ABAP1 etc. Users of S4 HANA will have access to the S4HANA schema, meaning they can view data from that schema. We can combine different schemas into one package; in the package we create a view, known as a graphical data model (GDM), where we actually combine data from different schemas, and we link it to the Lumira BI tool to get the report. To see which schema we are using: System → Status → Schema. Data is stored in schemas.

Certain tables are obsolete in S4 HANA, but we can still see them because such a table, for example MARD, has a proxy/replacement object which is linked to a CDS view, and that CDS view extracts the data from MATDOC_EXTRACT (in this MARD example). That is why we can still see MARD in SE16N.

One system's data goes into one schema, and one DB can have several schemas.

In Fiori we have the Query Browser to see all CDS views that can generate reports. Custom CDS views can be created by ABAPers. CDS views increase performance. Custom ABAP programs can be written in the SAP GUI, but CDS view creation is only possible in HANA Studio. In the HANA database we have a catalog, and under the catalog we have schemas. We can generate graphical reports from HANA Studio itself, meaning table-level report generation is possible at the database level, but this reporting is for consultants, not for end users.

Query browser for CDS View:

Aggregate tables are aggregations of transactional data on certain attributes and dates. In a functional spec, for tables which are no longer in use like MARD, we note the table name along with the proxy object and CDS view, so that ABAPers can read from the CDS view and create the report.

What is in Memory DB:

In a traditional DB, thanks to the rotating disk and drum, we do not lose data even on power-off. In an in-memory DB, data is not stored on disk or drum; it is stored as charge, in capacitors and transistors.

(Diagram: DRAM cells — capacitor/transistor pairs holding charge.)


An in-memory DB is built on dynamic RAM; data extraction from these charged capacitor/transistor cells is much faster than from a disk/drum DB. Also, an in-memory DB avoids the disk input/output of a traditional DB, which is why it is so fast, but on a power failure the in-memory DB loses its data.

In S4 HANA, we have both in memory and traditional DB like below,

User transaction → in-memory DB → (savepoint) → traditional DB, plus a log file.

Because of in-memory, transactions and reporting are fast, and due to the presence of the traditional DB there is no data loss in S4 HANA. Every 10 minutes the in-memory data is saved to the traditional DB through a savepoint. For example, savepoints run at 7:00 and 7:10 am; in between, data comes in at 7:03 and there is a complete power failure at 7:08. All data from 7:00 to 7:08 would seem to be lost, but SAP keeps a log file on a flash disk: every user transaction always goes to two places, in-memory and the log file. When the savepoint runs and puts all data into the traditional DB, the log file is cleared.
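The savepoint-plus-log mechanism can be sketched as below. This is a conceptual toy, not HANA's actual persistence layer: every write goes both to memory and to a log, a savepoint flushes memory to durable storage and clears the log, and recovery replays the log on top of the last savepoint.

```python
# Savepoint + log sketch: writes hit memory AND a durable log;
# savepoints persist everything and clear the log; recovery is
# last-savepoint state plus log replay.

in_memory, log, traditional_db = {}, [], {}

def write(key, value):
    in_memory[key] = value            # fast in-memory write
    log.append((key, value))          # durable log entry (flash disk)

def savepoint():
    traditional_db.update(in_memory)  # persist all data
    log.clear()                       # log no longer needed

def recover_after_power_failure():
    restored = dict(traditional_db)   # state as of the last savepoint
    for key, value in log:            # replay writes made since then
        restored[key] = value
    return restored

write("order_1", 100)
savepoint()                 # e.g. the 7:00 savepoint
write("order_2", 200)       # 7:03 transaction: only in memory + log
in_memory.clear()           # 7:08 power failure wipes memory
print(recover_after_power_failure())
```

Both orders survive the simulated failure: order_1 from the savepoint, order_2 from the log replay.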

OLAP, OLTP and user interaction always run in memory. The sizing of the in-memory store, the traditional DB and the flash disk differs from project to project.

(Diagram: data in the in-memory layer (hot layer), part of which is kept empty; data pushed down to the persistent layer / traditional DB (warm layer).)

We can push data from in-memory down to the traditional DB (the warm layer). It is a feature of HANA that the in-memory layer must not be filled 100% with data: a certain part must remain empty so that it can interact with users and do calculations on the fly. That is why free memory must always be available in the in-memory layer. When the free part starts filling, the HANA admin runs a report to check which data has not been used for weeks and pushes that data down from the hot layer to the warm layer (the traditional DB). If pushed-down data is needed again, it is pulled back up into memory. There is also one layer below the warm layer, known as the cold layer, used for data archiving.
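The admin's push-down decision described above can be sketched as a simple data-aging rule: tables untouched for N days move from the hot (in-memory) layer to the warm (disk) layer, and an access pulls them back. The tables, day counts and 30-day threshold are illustrative assumptions, not a real HANA policy.

```python
# Data-aging sketch: evict long-unused tables from the hot layer
# to the warm layer; pull them back on access.

hot = {"VBAK": 3, "MARA": 40, "KNA1": 50}   # table -> days since last access
warm = {}

def push_down(days_unused=30):
    # Move anything unused for the threshold period out of memory
    for table in [t for t, d in hot.items() if d >= days_unused]:
        warm[table] = hot.pop(table)

def access(table):
    if table in warm:                        # pull back on demand
        hot[table] = warm.pop(table)
    hot[table] = 0                           # just accessed now

push_down()
print(sorted(warm))   # tables pushed down to the warm layer
access("MARA")
print(sorted(warm))   # MARA pulled back into the hot layer
```

The same shape extends naturally with a third, cold tier for archived data.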

Delta and main storage Concept:

User → write → delta storage → main storage.

We use this concept to improve write performance. Each table is divided into two parts: VBAK, for example, becomes VBAK main and VBAK delta. When data is entered, it first fills the delta store; then a batch job runs and fills the main storage, and at this point compression happens. For OLAP, the system reads both stores and combines the result.
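The delta/main split described above can be sketched as below: writes land in an append-only delta store, a periodic merge moves them into the (compressed) main store, and reads combine both so nothing is missed. This is a conceptual toy, not HANA's actual delta-merge implementation.

```python
# Delta/main storage sketch: fast appends to delta, periodic merge
# into main, reads spanning both stores.

main, delta = [], []

def insert(row):
    delta.append(row)            # fast append-only write

def delta_merge():
    main.extend(delta)           # batch job: merge (and compress) into main
    delta.clear()

def read_all():
    return main + delta          # OLAP reads both stores

insert({"vbeln": "0001"})
delta_merge()
insert({"vbeln": "0002"})
print(read_all())
```

Both rows are visible to a read even though one has been merged and the other still sits in the delta store; that is why queries must always consult both.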

Sidecar scenario: ECC DB → SLT → HANA DB (MARA, KNA1, …) → HANA Live with Lumira.

The HANA DB has graphical data models: we combine different tables like MARA, KNA1 etc. and create a view. A virtual data model (VDM) covers both CDS views and graphical data models. A graphical data model has no code, whereas a CDS view is coded. The graphical data model sits on the HANA database, whereas CDS views sit in the application layer. Through HANA Live we can then see reports. HANA Live also suggests, for any sought report, which tables are needed, i.e. which tables we need to bring over from ECC. Through Lumira we can see the reports as graphs and pie charts. The Lumira BI tool should be linked to the HANA DB, not to S4 HANA.

CDS View in S4---- Embedded analytics--- Fiori --- Report generation

Data modelers create packages, and in a package we have views, i.e. which tables were used to create it. There is a document that tells us, for each business query, which views can be used.

Embedded Analysis:

Between two HANA DBs, integration can happen through SDI, the Smart Data Integration tool; there is no data replication, the two can simply access each other's data.

Analysis in S4 HANA is known as embedded analytics. It is also called out-of-the-box, because this analysis happens without any help from Lumira or HANA Live; it happens directly in Fiori. The different analytics types are:
1. Individual apps — specific business questions and KPIs like open contracts, open sales orders etc.; a single Fiori tile.
2. Overview apps — what we call dashboards / storyboards.
3. Query Browser — linked with CDS views; it analyses data and produces graphical representations.
4. Advanced analytical apps — from 1709: PO monitor, sales order monitor.
5. Analytical + transactional — e.g. inventory management.

The Query Browser is generic, meaning report generation works the same for any module.

Embedded analytics means Fiori itself has a BI tool integrated, and by using it we can do all the reporting above within Fiori itself. A CDS view is a program in ABAP on HANA, or in SQL.
Scenarios

A. Greenfield implementation, B. System conversion, C. Landscape transformation.

A. Greenfield Implementation: a new S4 HANA implementation. First we do our configuration, customization and Z development. Then, from the legacy system (SAP ECC or non-SAP), we bring the master data into S4 HANA through an ETL tool. We could also download the data into an Excel file and upload that Excel via the ETL tool. Once this is done, we deal with the transactional data. ETL means Extract, Transform, Load. We can use the HANA migration cockpit LTMC, and also LSMW, as ETL tools, but they need Excel files to upload; if instead we use the BODS tool (BusinessObjects Data Services), also an ETL tool, it can do the extraction on its own and upload the data. BODS sends master data to S4 HANA in the form of IDocs — but only master data, no transactional data. Open transactional data, like open orders and open deliveries, can be transferred using LTMC or LSMW. In this scenario no historical transactional data, no customization and no Z development are moved from ECC to S4 HANA.

B. Conversion: an existing ECC system is converted into an S4 HANA system. The database shifts to the HANA database, the application layer shifts to the S4 HANA application, the data model shifts to the new data model, and the ABAP code is changed to support the new data model. Here we keep all master data, transactional data, configuration and Z development. For this type of conversion, Basis uses SUM + DMO; SUM is the Software Update Manager and DMO is the Database Migration Option. It changes ECC into S4 HANA in a single step. Before doing that we need some pre-checks: there are actions on the ECC side and, after conversion, on the S4 HANA side. An ECC system has several interfaces and possibly attached third-party apps. The Basis team, with the help of the PAM (Product Availability Matrix), checks the compatibility of the ECC system and of the peripheral systems, to ensure the system can be converted to S4 HANA and the peripheral systems can still connect to it. After the PAM analysis, we decide whether to go ahead with the conversion or not.

Green field Vs Conversion:

1. Landscape complexity — greenfield is less complex.
2. System complexity with custom code — we check how much code has to be converted using the ATC and SRC (Readiness Check) tools, i.e. the conversion effort.
3. Cost.
4. Technical feasibility — the Maintenance Planner tells us this.
5. Number of add-ons on ECC — Maintenance Planner and SRC can check this. If SRM, GTS or AFS is attached to ECC, there is no conversion. We mostly do greenfield, as only very few systems go for conversion. We can attach APO to S4 HANA, but PPDS should not be active in APO, as PPDS is now an integral part of S4 HANA.
6. Historical data — if the company does not want to lose historical data, then conversion. For greenfield there is no historical data in S4 HANA, but all historical data from ECC can be put into a BW cube and kept in the BW system.

C. Landscape Consolidation: suppose we have four different ECC systems. One ECC gets converted into S4 HANA; the others then go through CCS — company-code-specific conversion — which can only be done by the SAP team with certain tools. The main problem is possible overlaps of company codes, materials, document numbers and transactional data, as the same number could exist in two or three systems. That is why we need CCS, to avoid this.

Adoption Path

The client is on ECC and wants to move to S4 HANA. The first approach is bottom-up: the client goes to Suite on HANA — ECC with the EHP7/8 patch on the HANA database. Here the database is changed to HANA but not the data model, so the old Z reports and all enhancements in ECC remain; only the database changes. The next step is Simple Finance, which is an add-on in the ECC layer, and the step after that is to move to S4 HANA.

The second approach is to go directly from ECC to S4 HANA, which is mostly recommended. The database change is done by the SUM tool. People often want to go first from ECC to SoH (Suite on HANA) because of the lower cost and very small impact on existing functionality.

To move from ECC to S4 HANA, the ECC version must be ECC 6.0 with enhancement pack 7/8. If the pack level is lower than 7, we first need to upgrade it to 7/8, and the system must be Unicode-enabled. For the database of that ECC, we apply the upgrade patches recommended per the PAM, then the DB is changed to the HANA DB and ECC to S4 HANA.

S4 HANA technical tool for System Conversion:

Preparation phase: system requirements check (PAM and SUM), Maintenance Planner, pre-checks (simplification check report and simplification item catalog), custom code preparation (ABAP Test Cockpit). Realization phase: Software Update Manager (SUM), application-specific follow-up activities (post-conversion functional activities). The SAP S4 HANA Readiness Check spans the whole process.

The Basis team checks the system requirements via PAM: what technical prerequisites must be in place for a conversion to S4 HANA, e.g. the system must be Unicode, a certain NetWeaver version (7.3 and above), and the database patches applicable to the DB change. Second is the Maintenance Planner, a web-based tool holding our landscape data, which helps us understand technical compatibility — whether the add-ons on top of ECC are supported or not. It then checks business functions, technical components like the NetWeaver version, and industry-specific solutions. If we have an add-on like SRM, we cannot convert to S4 HANA. If a certain business function is switched on in ECC but must always be off in S4 HANA, the Maintenance Planner will say no conversion. To check business functions, use transaction SFW5. The Basis team generates the Maintenance Planner report, and the functional consultants check this report and provide their feedback. The Basis team connects the landscape-specific data to the Maintenance Planner. The Maintenance Planner is used both for the conversion from ECC to S4 HANA and for S4 HANA version upgrades. Once we run it, it generates a report with details of business functions, add-ons and SAP Notes. Whenever we approach a client about S4, we first tell them to check with the Maintenance Planner. The next steps are the pre-checks by functional consultants and the custom code preparation by ABAPers.

Pre- checks:

SAP Provides,

1. Simplification List — a PDF describing what has changed, the business impact and the required actions; the simplification list tells you which functions available in ECC have changed in S4 HANA. This document exists exclusively to support the conversion process.
2. Simplification Catalog — from 1709 onwards, a web tool to check the relevant SAP Notes.
3. SAP Readiness Check (SRC) — a tool in SolMan. Check the SAP S4 HANA help portal as in the figure below, where you will find the SAP Readiness Check. Click on it to get the SAP Notes to be implemented, the name of the program to run in SA38, and a link to upload that data into the SRC; the SRC result then shows the readiness.
4. Simplification Check Report (SCR) — a program run in ECC that identifies, based on usage, which simplification items apply to your processes. SRC and SCR are the tools that tell us what is relevant for us. From the SAP Help Portal for S4 HANA we can find the product documentation and the conversion and upgrade assets. This is purely a functional duty.

https://help.sap.com/viewer/product/SAP_S4HANA_ON-PREMISE/1809.002/en-US

To get the SCR report, go to SA38 in ECC and run report /SDF/RC_START_CHECK. When it is executed, we can see the names of the notes we need to apply; we ask the Basis team to implement them. We first do this in the sandbox, then Dev, Quality, Prod etc. Executing the program gives us the insights. We use this program both for the ECC-to-S4-HANA conversion and for S4 HANA version upgrades. In ECC we can see a relevance column showing the relevant changes, with the corresponding note. For an error marked red, select it and click on the consistency check details to get the details of the error. Once the error is rectified, rerun the program and do the consistency check for all items. This process checks all inconsistencies in business processes, master data, the data model etc.

The same report is run in S4 HANA when we want to upgrade the S4 HANA version.

5. Custom code preparation: done by ABAPers. Standard code changes from ECC to S4 HANA without problems, but custom Z code and enhancements do get impacted during conversion. We use the ATC (ABAP Test Cockpit) tool: we attach ATC to ECC through an RFC connection, and ATC tells us which custom code is still allowed.

For a conversion, first copy a Dev system, do the pre-checks, configs etc., then convert it to S4 HANA and do the post-conversion steps for BP and FI. Then move those TRs from D to Q and convert that Q system to S4 HANA; do the same for P. SAP recommends doing the conversion in a sandbox first and, if the business is OK with it, going ahead. A conversion project can take 6 to 18 months.

FIORI
Problems with the SAP GUI: it has to be installed, the interface is not friendly, you need to remember T-codes, there is no search option, and users need training to use it. With NetWeaver (NWBC) we can use it in a web browser, but it does not work well on mobile and tablet, as NWBC is not simplified.

Fiori is the interface that works on web, mobile and tablet. Fiori runs on the UI5 technology, which is built on HTML5 and JavaScript. Through Fiori we can do both OLAP and OLTP. The preferred browser is Google Chrome. Fiori is role-based: the tiles in Fiori come from the roles assigned in PFCG. Fiori also comes with a search option.

Fiori architecture:

User → mobile / tablet / web → Web Dispatcher → firewall → ABAP front end (Fiori app layer) → OData (the service layer that exposes ABAP back-end data to the web UI) → ABAP back end → HANA DB.

Imp T- Codes:

LTRC- SLT CONFIG AND MONITOR

IUUC_REPL_CONTENT – Modify Tables structure for SLT.

LTMC – Migration Cockpit

LTMOM – Migration Modeler

/n/ui2/flpd_cust- Launchpad designer

/n/ui2/flp – Fiori Launchpad

FLETS and OMSL – material number length extension


This portion of Fiori is known as a business group, or just group.

The red-marked portions below are known as tiles.

As we can see, under a group we have tiles. From Fiori, if we try to create an SO/PO etc., it opens a web page, which is NWBC (NetWeaver). Fiori is linked with NetWeaver in this way.

Any transaction screen in Fiori is created by a UI5 consultant. In Fiori, if we click on App Finder, we get a list known as a catalog, as below.

This catalog is linked to our role in the PFCG transaction. If we click on a catalog, we can see which applications it contains.

Here we can see a pin option. An inactive pin means that app is not relevant for our role. A user role is assigned a catalog and a business group, not individual applications. We choose apps from the catalog and add them to the business group, so in Fiori we see apps under business groups. If an app is missing from the catalog, we cannot use it even if it is in the business group. A user can change business groups and apps, and that change is specific to them only. We can change business groups via the "Edit Home Page" option: new business groups can be added, old ones removed, renamed etc.
In Fiori, we can maintain default values like below:

We can bring data from different sources into the HANA DB with tools like SLT / BODS / SDI. That data resides in a schema. We can combine that data in the HANA DB in a CDS view, which is then consumed in S4 HANA. In certain scenarios BW takes data from different sources, and the BI layer takes data from both S4 HANA and BW and generates the report.

How to see that it is row or column storage:

Enter the table in SE11, and then check as shown:


FIORI Library:

Link - https://fioriappslibrary.hana.ondemand.com/sap/fix/externalViewer/index.html

Click on "SAP Fiori Apps for SAP S4 HANA" to get all S4 HANA-related apps in Fiori.
As an example we have taken Sales Order Fulfillment Issues, where we can see where this app is applicable, plus the app ID and implementation details. FYI: there are different types of apps — transactional, analytical and fact sheets.

The "Implementation Info" tab is very important:

1. SAP Notes — for the Basis team.
2. Installation (front end and back end) — for ABAPers.
3. Configuration — for functional consultants, as it holds the information on OData, roles, catalog and group details. We also get the technical name, which is the app's service; this has to be activated by the Basis team, and the OData service also has to be activated by Basis. The role is entered in SU01 or PFCG. In practice we do not directly assign the delivered business role to end users — that is only for core team members. For end users we use business catalogs: we create a Z catalog and then a Z group, and those are assigned to the end users. If we directly assigned the delivered role in PFCG, end users would be granted far more access than required; that is why we create the Z catalog and Z group. Once the role is assigned in PFCG, the user can use that app in Fiori — that is why it is role-based.

*** If a Fiori app is not working, we can clear the browser cache, or in SAP run /UI2/INVALIDATE_GLOBAL_CACHES in SA38 to clear the server cache.
If we need several apps, we select them and click the Aggregate tab as below.

Fiori Launch Pad designer:

T code - /N/UI2/FLPD_CUST

This is not for end users, only for functional and technical consultants. We create business catalogs and business groups here. In the Fiori app itself we can also create catalogs and groups, but those are user-specific; whatever we create here applies to all users globally.

In the Fiori Launchpad Designer we can see the catalog and group options as above. The + button creates a new one; it is recommended to create with reference.
To create with reference, first search for the catalog and then drag it; a popup appears for the new catalog with reference, as below in blue.

Enter a title and ID, click Copy, and we can add our own apps here.

Apps can also be deleted here by dragging.

Once we are done with our Z catalog, we click on Group.

Click the + sign and enter our title and ID; ticking "Enable users to personalize their group" makes the group editable in Fiori.
Under the "Show as tiles" option, select our catalog and add it.
Now in SAP, via PFCG, under Menu there is an option to add an SAP Fiori tile catalog and an SAP Fiori tile group, using the IDs we created during catalog and business group creation. Putting a user ID in the user field grants that user access to the Fiori app. From the Fiori apps library we can take the catalog information and then use it in the Fiori Launchpad Designer to search for the catalog.

Activate Methodology:

The ASAP methodology for ECC is a waterfall methodology: one phase completes, then the next starts. Activate is an agile methodology. SAP combines business processes, best practices, guided configuration and test scripts into the Activate methodology.

1. Best Practices — the best way to run a business process (steps, responsible persons, documents required) as recommended by SAP; a guideline from SAP. Under the Activate methodology SAP provides a best-practice client, where we get configuration, a model company and master data. The major, highly used scenarios come pre-configured by SAP. When we buy standard SAP, best practices do not come pre-installed; Basis needs to install them, which takes 3-5 days, and this is only done in the sandbox system. Consultants show the best-practice sandbox to the client, and the client can then explore the sandbox with Fiori and get to know the S4 HANA system.
2. Test scripts — all data required for executing a test, and how to test.
3. Business process flows / business process mapping (BPM) — flow charts from SAP.

Please note that we do not install best practices in Dev, Quality or Prod; there we use the SAP-provided guided configuration for our own config. Best practices are only for the sandbox and for client demos.
T-code: /n/smb/bbi

Click the different links to get the processes and steps. Use only the activated ones (marked with a green tick).

https://rapid.sap.com/bp/ - imp link

Click on S4 HANA → On Premise → SAP Best Practices for S4 HANA (On Premise) → select the version.
Inside the content library you find:

 Predefined processes (scope items)


 Test scripts
 Process flows in modelling notation BPMN 2.0
 Factsheets
 Configuration guides
 General information
Best practice consists of: business process flows, the best-practice client, standard configuration, the model company, test scripts and guided configuration.

Guided configuration comes in two flavours: 1. on-premise, 2. public cloud. Guided configuration is basically meant for the public cloud; for on-premise we use SPRO or the Solution Builder for configuration. On-premise guided configs are essentially configuration guides in Word documents. For the public cloud there are two types of guided config: self-service configuration and expert configuration. With self-service configuration, through Fiori, we can do basic config like the org structure; expert configuration covers things like new document types or new item categories and is done by SAP itself, by sending SAP an Excel file filled with your data.

/N/SMB/BBI is the Solution Builder, which can be used just like SPRO: click on the building block builder and then select the process from the scope items.

Methodology: we can view the methodology in the Roadmap Viewer and in the Activate Jam community, which is a bit like Facebook.

https://go.support.sap.com/roadmapviewer/
There is some terminology we need to know:

Workstream → deliverables → tasks → accelerators.

A workstream is a group of people / a project-management stream. Within a workstream we have deliverables, which are linked to phases: deliverables of the Prepare phase should be completed within the Prepare phase, not in the Explore phase. Deliverables consist of tasks, and SAP provides accelerators so that we can run the project smoothly. Accelerators are documents, SAP Notes and web links. Please go through the site above first.

Activate Methodology:

1.Discover: Working with business case, decision to take that we should go for S4 Hana or not. Presales
team mainly works here. Decision on premise or public cloud, green field or conversion decision making
etc.

2. Prepare: Project planning, preparation, kick off, team finalization, on boarding of team, project plan,
charter doc prep, modules adoption, change management, high level risks. Prepare sandbox system with
best practice, also do copy of prod for conversion.

3. Explore: Design activity, validate the solution using the S4 HANA sandbox, fit-gap analysis, create the model company structure, delta design, sprint planning, custom code analysis, ABAP Test Cockpit, running the simplification check, action items, SUM tool.

4. Realize: Execution, development, config, preparing and uploading master data; if it is a conversion project then all conversion is done here; testing like sprint test (unit test), string test (multiple-sprint test), integration test and UAT.

5. Deploy: Cutover activities, ramp-down/ramp-up plan, business readiness check, setting up the production system, master data load, pre- and post-checks, smoke test and full dress rehearsal.

6. Run: Intensive care, handover with support team.


The waterfall methodology means one step at a time: one step is completed, then we start the next one. It mainly consists of info gathering, design, testing, cutover, go-live and support. The main reason waterfall fails is that we depend on info from end users and the whole project is built on it, and it could happen that this info is not always right; waterfall implementations also take a long time and sometimes run over budget.

Agile methodology – here we involve end users and break up requirements into must have, should have, could have and would have. Based on that, we first take up the must-haves and some of the should-haves and do the implementation through prepare, explore, realize, testing (UAT), deploy, and then start again with the next ones. In the explore phase, we start showing end-to-end scenarios using best practice content in the sandbox, and after that we start gathering requirements, which are divided into must have, should have, could have, would have. In agile, we don't just take requirements; end users have to write them down themselves, which is known as a user story (Excel type).

Migration Tools:

1. HANA Migration Cockpit – LTMC, where the user downloads an Excel template, fills data into it and uploads it.
2. LSMW
3. BODS – BusinessObjects Data Services and Rapid Data Migration
4. Own Z program to upload
5. 3rd-party tools like Winshuttle

For a conversion project, as data migration we use SUM+DMO. SLT is for the sidecar approach. LTMC could be used in a conversion too, but only where we need to create new master data, not for transferring old materials.

HANA Migration Cockpit- LTMC and LTMOM:

In LTMC, we download an Excel template, fill it with data and upload it to create master data, which is mostly for end users; LTMOM is at consultant level, where we could do field modifications and enhancements such as adding Z fields.

LTMC:
1. T code is LTMC
2. Entering the LTMC t-code will open the HANA Migration Cockpit in a web page
3. Sometimes we could see a web page like below
4. Need to click below to proceed further
5. We could see the SAP NetWeaver logon screen like below; use the user ID and password which you use to log in to the S4 system
6. This screen will only open if your first login has failed and you have logged on twice
7. Below is the data migration tool page

8. Need to click Create and put the project name. "Transfer data from file" is the option where we use an Excel file to upload the data; the second option is where we transfer from BODS: using BODS, data are stored in the HANA database itself and we then select them from there. The mass transfer ID is a very important ID, as through it we could import metadata from quality to production; only metadata, meaning this migration project setup can be transferred so we don't have to create it again and again.

9. Put the data and click the Create option. The mass transfer ID should be 3 characters.
10. We need to select whatever master data we are planning to use, like material master. Against the highlighted documentation we have a Show option, which helps us understand what exactly we are planning to do.

This is what we get if we click on Show.


11. Now we are clicking on material

12. Download template


13. Attached is how the file looks:

EN_Material.xml

14. The several tabs look like this:

15. Please note that yellow highlighting means we need to fill data there. We are mainly filling the fields marked with *, and note that the material type and industry sector values do not match our SAP ones.
16. For whatever views we have filled data in, we need to tick X here.

17. Upload the file. We could see that we have missed mandatory fields, so we have to rectify those.

18. After that validation, we need to select the records and activate them.


19.Activate

20. Start transfer

21. Click start transfer.


22. Select Error:

23. Click on Next to get the screen where we actually do the mapping. The material type and industry sector do not match our SAP values, so here we do that mapping.
Select all and click on Confirm Mapping.

After confirm mapping, we could see that exact two fields which requires mapping.

For the industry sector key, it has taken M as below, which is why we don't have to change anything for it.
24. Click on Name, now change below highlighted one with our SAP Value
25. Click next
26. Click on next to complete
We can't change materials using LTMC. Suppose we have 3000 materials, of which 40 have errors. The system will then create a file named "Delta"; we could download that file, delete it here, correct the values, and re-upload the file. There is another way of mapping: select the project name, open it, select the migration object, then Settings, and there we could do the mapping. For LTMC, we don't need any separate license, infrastructure or team, whereas BODS requires them all at extra cost. BODS is the recommended tool as it handles data very well, but as it is costly, people use LTMC. For a large implementation, though, where we use 10-12 SAP systems, we need to use BODS.

LTMOM:

LTMC is for data upload, whereas LTMOM is for changes to the standard template. So, as per requirements, we do edits in LTMOM such as adding Z fields, removing fields that are not required, making fields mandatory for upload, creating a new template from scratch, etc.

To start, click on "Migration Object" below and search for the project; whatever we change, that change applies within that project only. Select your own project and object.
On the left side, click on the source structure, which will open all fields available in the template. The source structure is our Excel file. It is recommended not to delete fields; rather, we should follow the path below.
Once we are done with Visible / Not Visible / Required, we need to click below to generate. The Excel template with our chosen values will then be downloaded.

Z fields could be added using the + sign as below, but remember those Z fields must also exist in the BAPI structure, as the BAPI will be filling those Z structures in S4 HANA. Here we need ABAPers.

To get the BAPI details, click on the target structure; we need an ABAPer to add our own Z fields to that BAPI.
For creation of a whole new tab, we need to click on the source structure, then at the end, as below, right-click.

The field mapping option maps the source structure to the target structure, e.g. for Z fields, where we need to select our Z field, drag it, and drop it onto the right-hand field; green-ticked fields are already mapped and red-ticked ones are yet to be mapped.
To remove an assignment, use the option below.

In field mapping, at the right corner, double-clicking on any structure lets us put a value on it, and that value becomes the default. For rules, we need an ABAPer.

Cloud Strategies:
1. On premise
2. Public cloud
3. Private Cloud
4. HEC – HANA Enterprise Cloud
5. HCP/SCP – HANA Cloud Platform / SAP Cloud Platform

Cloud means we are using someone else's machine from our own machine over the internet, i.e. we have outsourced the infrastructure, like Gmail, Yahoo Mail, Google Drive etc. Take Gmail: it is used by everyone, and everyone can do a certain amount of customization on their own. It is a shared system and it is auto-upgraded; Gmail never asks our permission before updating. We could use a paid service to increase the size of Google Drive, and once we sign up for these services, they are ready to use.
Parameters:

1. Application, 2. Data, 3. Runtime, 4. Middleware, 5. Operating system, 6. Virtualization, 7. Server, 8. Storage, 9. Networking

A. Software as a Service (SaaS) – like Gmail, Google Drive etc. All the above layers are managed by the vendor. S4 HANA public cloud is the same: everything is provided by SAP. Examples: SuccessFactors, Ariba. The company needs to take a subscription, and if the subscription is not renewed, SAP will share your data with you in a shared path.

B. Platform as a Service – here the application and data are managed by the client itself, and layers 3-9 are managed by the vendor. Examples: HCP (HANA Cloud Platform) and SCP (SAP Cloud Platform).

C. Infrastructure as a Service – here layers 6-9 are managed by the vendor, e.g. Amazon Web Services (AWS), Azure. S4 HANA private cloud uses this, where the client takes an S4 HANA on-premise license and uses a cloud service. HEC also falls here: HEC means on-premise S4 HANA running on SAP's cloud, i.e. a database service on SAP HANA Enterprise Cloud.

HCP/SCP is a hybrid approach where we bring certain data from on premise to the cloud and build applications.

Private Cloud: full ERP scope, full process flexibility, config options, single subscription contract, cloud enterprise support, shared responsibility, Web and SAP GUI, annual innovation.

Public Cloud: essential business processes, config managed by SAP, single subscription, SAP responsible for support and upgrades, Web-only access, quarterly innovation, no 3rd-party service option.
Deployment Options:

Enterprise Structure:

As per ECC system – define and assign org elements

Material Master data:

The main change here is the material number, which was previously 18 characters and is now extended to 40 characters, but before using that we need to activate it. In SE11, enter MARA and see as below.
This shows we have activated the field extension, but to what length? That we have to maintain through the OMSL transaction, as below.
We need this extension because of 1. the automotive industry; 2. for materials, we have two types of numbers: non-significant ("stupid") numbers like 1, 2, 3, from which nobody understands anything, and intelligent numbers which follow a certain nomenclature like Brand – Sub-brand – Group – Pack size. The latter is mostly followed by FMCG and the steel industry. To fit such a nomenclature, the minimum number of characters needed is 24, and with this type of nomenclature there will not be any duplication. In ECC we use non-significant numbers, which is why we need the description to make them understandable. T-codes used: FLETS and OMSL.

If we adopt more than 18 characters, then for IDocs we have a new segment, "Material Long", which is 40 characters, and ABAPers need to map it. For conversion, existing materials in ECC will come over unchanged to S4 HANA; new materials can be 40 characters. Before activating 40 characters, we need to check the peripheral systems. Please note, we can't use a purely numeric number of 40 digits; a 40-character number must be alphanumeric. Numeric numbers can be up to 18 characters, alphanumeric up to 40. In MM01, under Defaults, we could change the industry sector and views, hide them or make them default.
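The 18-vs-40-character rule above can be sketched in a few lines; this is an illustrative check of my own (the function name is invented, the limits come from the notes), not SAP's actual validation:

```python
# Illustrative sketch of the extended material number rule described above:
# purely numeric IDs stay within the classic 18 characters, while
# alphanumeric IDs may use the extended 40 characters.

def material_number_ok(matnr: str) -> bool:
    limit = 18 if matnr.isdigit() else 40
    return 0 < len(matnr) <= limit

print(material_number_ok("123456789012345678"))   # 18-digit numeric: ok
print(material_number_ok("1234567890123456789"))  # 19-digit numeric: too long
print(material_number_ok("BRAND-SUBBRAND-GRP-PACK-500ML-EXPORT-01"))  # alphanumeric
```

An intelligent number built from a Brand – Sub-brand – Group – Pack size nomenclature, as in the last call, fits comfortably within 40 alphanumeric characters.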

Now MM01:
We could see that it is a huge long name:
Now if we do MB1C
Need to do MIGO like this;
The material document header table is MKPF and the item table is MSEG in ECC, but in S4 HANA these tables are clubbed into the MATDOC table. The stock data of MARC, MARD and MSKU are now available in the MATDOC table itself.

If we look at MARA/MARC etc., we will still get data. The reason is that these now act as compatibility views; they still deliver data so that existing report logic does not break.

Change in material ledger:

The material ledger is used for valuation of materials in multiple currencies and for costing; here, costing is based on the material ledger.

Customer Master data:


Why BP:

1. Principle of one
2. Harmonization between CRM , SRM , EWM, TM, MDG system as they use BP
3. Simplified master data creation like several tcodes are not in use now, only single BP
4. Centralized and one time maintenance of master data tab like address
5. We could maintain multiple addresses in BP
6. FSCM Credit management in S4 only uses BP

In Fiori,

If we try to create customer master data through XD01/VD01/FD01, the system will not allow it. Only the BP transaction is allowed.
If we use BP transaction, we will get screen like below,
We have two important tabs here like BP role and Grouping.

The BP role gives us the fields/tabs required, and the grouping is strictly the basis for number range assignment.

First start with account group creation:


This no range will give us customer number.
Below no range will give us BP numbers.
As highlighted, we could see Int. Std Grp and Ext. Std Grp, which means that if we tick this, by default the system will pick this grouping if the user forgets to mention any particular group; the Hide button will hide the grouping, as it could happen that we have created a new grouping which we are planning to use only in the future.

The next step is to assign both the account group and BP grouping through SPRO – Cross Application Component.
To have the same number for customer and BP, we need to use this option as well: choose the same number range and assign it to both the account group and the BP grouping, where for the BP grouping the number range is internal and for the account group the number range is ticked as external.

Now we should start creating the BP.

The highlighted grouping is only available when we create, not in view or edit mode.
Now we are going edit part of it,
One more newly introduced thing is status management, which we normally use on documents; on the customer we could also use it for different statuses, e.g. a customer is ready for training, training completed, and so on, and based on a user exit it could behave differently, e.g. blocked, open etc.
Double click on each status will give us;
Click on
How to get BP and Customer relationship:

Use table BUT000 and take out BP GUID

Now use that BP GUID and use it in CVI_CUST_LINK table


For vendor we use CVI_VEND_LINK
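The lookup chain just described (BP number to GUID via BUT000, then GUID to customer via CVI_CUST_LINK) can be mimicked with in-memory stand-ins; the table roles follow the notes, while the sample values are invented:

```python
# Sketch of the BP-to-customer lookup described above, using dicts as
# stand-ins for the BUT000 and CVI_CUST_LINK tables (sample data invented).
from typing import Optional

BUT000 = {"1000023": "A1B2-GUID"}              # BP number -> BP GUID
CVI_CUST_LINK = {"A1B2-GUID": "0000100045"}    # BP GUID  -> customer number
# for vendors, the same pattern would use CVI_VEND_LINK

def customer_for_bp(bp_number: str) -> Optional[str]:
    guid = BUT000.get(bp_number)               # step 1: BUT000 gives the GUID
    if guid is None:
        return None
    return CVI_CUST_LINK.get(guid)             # step 2: link table gives customer

print(customer_for_bp("1000023"))
```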

BP Tables:

BUT000, BUT020 , BUT100

Below is the so-called BP-CVI linkage. In a conversion from ECC, S4 HANA expects this linkage, and without it, it is not possible to convert from ECC to S4 HANA. So in ECC we first need to link all customers and vendors with BPs, otherwise the conversion will not happen.

**** Imp – SPRO – Cross App Component – Master Data Sync – Cust/Vend Integration – Business Partner Settings – Settings for Customer Integration – Set BP Role Category for Direction BP to Customer. For us, maintaining FLCU01 makes the system understand that for that BP, a customer needs to be created. This is done in the S4 HANA system. For ECC, we need to use "Define BP role for direction customer to BP".

The "Relationship" button is to create a contact person. It is mainly used by FSCM for credit management.

The path below is the same as account group field control; here we do the field control based on the BP role.
For S4 HANA errors, use the link below; under Knowledge Base, search for the issue.

https://launchpad.support.sap.com/

In SE38, run CVI_FS_CHECK_CUSTOMIZING; this report will help us check all prerequisites for the conversion of BP to customer and vice versa.
Inventory Change:
Tables MATDOC and MATDOC_EXTRACT

No MB1A/B/C, only MIGO

Fiori apps – Stock Multiple Materials, Stock Single Material, Inventory Turnover Analysis, Slow or Non-Moving Materials, Material Inventory Value, Goods Movement Analysis, Overview Inventory Management.
Pricing changes:
The counter changes to 3 digits.

Customer pricing procedure and document pricing procedure are two characters now.
The PO number field has been renamed to Customer Reference.
The table-level change: in ECC the pricing table is KONV, but in S4 HANA it is PRCD_ELEMENTS.

In ECC, we fetch the document condition field from VBAK and use that value in the KONV table to fetch all pricing details, as below.

In S4 HANA, this has been changed.

There is a program for pricing migration from KONV to PRCD_ELEMENTS, needed only if the system was migrated from ECC to S4 HANA.
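The access pattern described above survives the table rename; below is a sketch with invented sample rows, where only the table name changes from KONV to PRCD_ELEMENTS:

```python
# Sketch of the pricing-table change: the fetch pattern is unchanged,
# only KONV becomes PRCD_ELEMENTS in S4 HANA (sample data invented).

VBAK = {"5001": {"KNUMV": "0000012345"}}   # sales order header -> doc condition no.

PRCD_ELEMENTS = [                          # replaces KONV in S4 HANA
    {"KNUMV": "0000012345", "KSCHL": "PR00", "KWERT": 500.0},
    {"KNUMV": "0000012345", "KSCHL": "MWST", "KWERT": 90.0},
    {"KNUMV": "0000099999", "KSCHL": "PR00", "KWERT": 10.0},
]

def pricing_for_order(vbeln: str):
    knumv = VBAK[vbeln]["KNUMV"]           # step 1: header gives KNUMV
    return [row for row in PRCD_ELEMENTS if row["KNUMV"] == knumv]  # step 2

for row in pricing_for_order("5001"):
    print(row["KSCHL"], row["KWERT"])
```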
Credit Management:
If we do credit management like ECC, we will find like below,

It is now moved to FSCM

Credit segment is highest here

Before doing anything, we need to deactivate the classic credit management so that the system will proceed with FSCM credit management:
Now doing assignments:
Need to complete these assignments:

We also need to create below highlighted,


1st rating procedure:
Both result types will be in use;

By clicking formula editor, we could do our own logic,


Now we create new rule
This is automatic credit management screen,

For customer credit, we need to use UKM000 role or UKM_BP,


We could do a simulation using above highlighted tab,
For releasing credit-blocked documents, we could use VKM1 but not VKM3. In S4 HANA, we have UKM_MY_DCDS, shown below.
Conversion from ECC to S4 HANA

Possible with the SUM+DMO tool. FD32 data changes to BP data, done by the FI consultant only.

Fiori- use search term Credit in Fiori

OUTPUT:
From 1610, SAP supports both BRF+ and NACE for output.

Now if customers want to move to BRF+, we first need to activate the BRF+ setting as below.

 SPRO
 Cross Application Component
 Output Control
1. "Manage Application Object Type Activation" is where the NACE or BRF+ decision is activated

Only these objects are allowed,

2. "Define Output Type" is for assigning the output type to the object
3. Assign output channel:

4.Assign form template:

Generally the S4 system supports Adobe Forms, but we could change it to Smart Forms or SAPscript from here.

There is one tcode BRF+

We need to first run the BRF+ tcode and upload the forms relevant to the billing document; this step is mandatory and is strictly done by the technical team.

SAP Note 2248229 – here we will have a set of XML files like below.
Like for Del it is- OPD_V2_DELIVERY_OUTPUT

Below ex of some files for PO, excise invoice etc.

If you try to open these XML files, they look like below; here it is the billing doc.

We could import this XML into BRF+


To import that, first click there,

We get

Set the user mode to Expert, otherwise we can't import the XML file.
Select XML Import, which is at the end, and choose which file needs to be uploaded.

After that we need to select customizing request,


Like below we could do a search and choose TR

Then we could do a test run,


By clicking upload XML File
For success we get

Once imported, it will be


The XML file uploaded here is a representation of the associated configuration.

For BRF+, we give the requirement of which parameters should be used for output; based on that, it will be created by a technical person, and we test it after maintaining the condition record in tcode OPD.

If we get the issue below in the OPD tcode, it means we did not do the BRF+ setup for that set.

We always need an XML file to upload. A BRF consultant could create it based on our requirement.

The combinations below, like billing type, sales org and customer group, are actually created by the BRF consultant based on our requirements.

These combinations could be changed via table settings.


Now to add value here, click on edit,
Always activate it

We need to maintain all drop down as per need.


How to change table content:

In edit mode, click on table setting,


Select and do OK.

BRF+ primarily supports Adobe Forms, but SAPscript or Smart Forms could also be used.
ATP Change:
In S4, we have both ATP and aATP. In aATP, we have product availability check, new backorder processing (BOP), new product allocation, and release for delivery (RFD). These advanced tools in aATP are all Fiori based, not t-code based.

Only individual requirement records are kept now: table VBBE only, not the summarized VBBS.

PAC – product availability check uses ATP. It works at sales order and production order level; BOP also uses it. ATP means receipts of material minus demand of material over a time frame. ATP only confirms whether the system will be able to deliver the quantity to the customer on the requested date, whereas MRP creates purchase orders and purchase requisitions for the incoming material flow. ATP is only for confirmation, whereas MRP is for planning. A hugely positive ATP means a lot of stock and few sales, and a negative ATP means we have demand but no stock to cover it. So ATP should be optimal.
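The ATP definition above (receipts minus demand over a time frame) can be sketched as a running total; the dates and quantities here are invented, and real ATP of course also respects the scope of check, checking group, etc.:

```python
# Sketch of the ATP idea described above: cumulate receipts minus demand
# up to a given day and see whether a requested quantity can be confirmed
# on the requested date (simplified illustration, not SAP's algorithm).
from datetime import date

receipts = {date(2024, 1, 1): 100, date(2024, 1, 5): 50}  # stock / incoming supply
demands  = {date(2024, 1, 3): 60}                          # already-confirmed demand

def atp_on(day: date) -> int:
    supply = sum(q for d, q in receipts.items() if d <= day)
    demand = sum(q for d, q in demands.items() if d <= day)
    return supply - demand

def can_confirm(qty: int, requested: date) -> bool:
    return atp_on(requested) >= qty

print(atp_on(date(2024, 1, 4)))           # 100 - 60 = 40
print(can_confirm(80, date(2024, 1, 6)))  # 150 - 60 = 90 >= 80 -> True
```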

ECC having ATP but S4 HANA having ATP+ aATP

Customers using ATP in ECC could adopt aATP, but customers using ECC with APO-GATP should connect their S4 HANA system to the APO-GATP system and keep using it, as the GATP functionalities are not available in aATP. APO will be replaced by IBP and S4 HANA by 2025.

The impact of aATP on the sales order is the same as ATP. Everything is the same, but if we activate aATP, performance will be higher; the system will take less time to calculate ATP. We need to activate it in the checking group.

ATP-BOP:
In ECC, the system does backorder processing based on the delivery priority in the customer master. In the existing BOP, we can't build logic on our own; we always need an ABAPer. In advanced BOP, we have different strategies and, based on those, we could do our BOP. Here key users could write their own business logic in a segment, and there is no need for complex config and enhancements like in ECC. BOP cannot work if the sales order has already been converted to a delivery.

New Configuration strategies in BOP in S4 HANA:

Here we write logic, and based on that logic, a sales order falls into any of the strategies mentioned below. When our demand is greater than supply, we need BOP.
Win – confirm as requested; shall be fully confirmed and in time, as these are for important customers.
Gain – improve if possible; shall keep its confirmation and should gain if possible.
Redistribute – redistribute; might gain or lose confirmation.
Fill – delivery confirmation if required, for non-priority customers.
Lose – delivery confirmation shall lose all confirmation, e.g. credit-blocked orders.

Stock transfer order and Sales order mainly use BOP but not PO.

 In the Win category, if an SO is partially confirmed, it will try to take quantities from SOs of the other categories mentioned above and confirm that SO first, and no other SO can take quantities from an SO of the Win category. The system always tries to confirm this SO in full quantity and in time.
 Gain – the system will confirm the quantity, but maybe not in time, maybe at a future date. This is lower priority than Win. It can only take quantity.
 Redistribute – it can give or take quantities from other SOs.
 Fill – only give, not take. Based on the requirement, it gives.
 Lose – always gives, and after BOP it will always be zero.

These strategies are dynamic in nature, meaning an order can change its bucket, but ECC BOP is always static, based on the delivery priority in the customer master.

An example of business logic: requested delivery date is today + 3 months. Based on this type of logic, the SO is put into any of these 5 buckets.
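The give/take behavior of the five strategies can be summarized in a small model; this is my own simplified reading of the notes (Win may pull from any other bucket, and nothing may pull from Win), not SAP code:

```python
# Sketch of the give/take behavior of the five BOP strategies described
# above: whether an order in a bucket may take quantity from others, and
# whether it may give quantity away (simplified summary of the notes).

BOP_STRATEGIES = {
    "Win":          {"may_take": True,  "may_give": False},  # fully confirm, in time
    "Gain":         {"may_take": True,  "may_give": False},  # confirm, maybe later
    "Redistribute": {"may_take": True,  "may_give": True},
    "Fill":         {"may_take": False, "may_give": True},
    "Lose":         {"may_take": False, "may_give": True},   # ends up with zero
}

def can_pull_from(taker: str, giver: str) -> bool:
    """True if an order in bucket `taker` may take quantity from `giver`."""
    if taker == "Win":
        return giver != "Win"  # Win may pull from any other bucket
    return BOP_STRATEGIES[taker]["may_take"] and BOP_STRATEGIES[giver]["may_give"]

print(can_pull_from("Win", "Lose"))           # True
print(can_pull_from("Fill", "Redistribute"))  # False: Fill only gives
```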

Config:

Complete below all steps just like we do it in ECC;


OR
Here we need to activate aATP; based on its activation, the system will check the logic and scope. The rest of the config is in Fiori apps. In the Fiori launchpad, search for BOP and we will find a couple of apps related to BOP.
1. Configure BOP Segment – to write business logic per our own requirement, e.g. requested delivery date greater than 30 days.
2. Configure BOP Variant – here we have the 5 strategies like Win, Lose etc., and we assign the business logic to the strategies.
3. Schedule BOP Run – for scheduling the batch job.
4. Monitor BOP Run – reporting, to monitor the progress of a BOP run.
5. Configure Custom BOP Sorting – an optional one.

Now one by one, we will configure imp steps:

Configure BOP segment-

Click on create,

We put the segment name and description, and in the requirement filter we put logic like the delivery date of the ATP document (which will come automatically). When we type an initial "D", the system starts suggesting all logic phrases starting with "D", as below; this automatic suggestion comes from HRF – HANA Rules Framework, activated by the Basis team. If we can't see this type of suggestion, we need to ask Basis to activate HRF.

After our first phrase, give a space, and the system will show us whether the value should be equal to, not equal to, or greater than, as below, and we need to select accordingly.
After that, press space again, which gives options like "is it today or a later date", as below.
So, to write a logic: type D, which gives all phrases starting with D, like "delivery date of ATP document"; press space, which gives options like equal to / greater than / less than; press space again, which gives options for which date; press space again, which gives + / - etc.
The final logic, where I have manually put 30:

Based on this logic, system will capture all SOs.

In the priority section below, use + and we could specify which order should come first. A tick marks an attribute as priority one.
Now go back, click on Configure BOP Variant, then click on Create.

Give the name and description, and choose whether it applies to SOs, STOs or both.


We get these checking options from the path below. Exception behavior means what will happen if there is an exception, and fallback variant means that in case of an exception the run will follow that variant.
Now close this,

Now, in the segment name, put the segment we have just created, and at the top right, in the dropdown, we could see the strategies where we want to put our segment with our own logic.

By putting segments like that, we have created all 5 strategies, as below.
We run it now in simulation mode like below,

In monitor BOP run, we could see as below,


Schedule BOP Run app is for setting up batch job.

Product Allocation:
In ECC, we have 15-20 config steps, but in S4 we have only 4 steps, done in Fiori with no TR generation. At the time of the ATP check, product allocation is called and the system checks the product allocation quantity against the required quantity. Schedule line quantities will be based on the product allocation.

We need to activate Product allocation by below path:


Configure Product allocation- Click on + sign
We put below data:

Below we are adding the characteristics, i.e. at which level the allocation applies.

The next step is Manage Product Allocation Planning Data, where we need to select our object and double-click it.
Here we maintain the combination we have just defined and put the exact value of the material.

Next one is Manage Product allocation sequence ;


Backward and forward consumption of 1 means that, based on these data, on a particular date the system will try to confirm the sales order quantity from one period backward and one period forward. If these are not maintained, the system will not check one period forward and backward, and it will not confirm that extra quantity in the SO. This forward and backward means one schedule line can be confirmed from the past and one from the future. "Past period allowed" means the system can check past months for quantities. Constraint means restricted or unrestricted.
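The backward/forward consumption just described can be sketched as follows; the period keys, quantities and function name are invented, and the real check of course works on the allocation planning data maintained above:

```python
# Sketch of backward/forward consumption: a demand in one period may also
# consume allocation quantity from 1 period back and 1 period ahead
# (invented sample data; simplified illustration of the idea in the notes).

allocation = {"2024-01": 0, "2024-02": 30, "2024-03": 20}  # remaining per period

def confirm(qty, period, periods, backward=1, forward=1):
    """Consume allocation from `period`, then from up to `backward` earlier
    and `forward` later periods; returns the confirmed quantity."""
    i = periods.index(period)
    order = [i] + [i - b for b in range(1, backward + 1) if i - b >= 0] \
                + [i + f for f in range(1, forward + 1) if i + f < len(periods)]
    confirmed = 0
    for j in order:
        take = min(qty - confirmed, allocation[periods[j]])
        allocation[periods[j]] -= take
        confirmed += take
        if confirmed == qty:
            break
    return confirmed

periods = ["2024-01", "2024-02", "2024-03"]
print(confirm(40, "2024-02", periods))  # 30 from Feb + 10 from Mar (Jan is empty)
```

With backward=0 and forward=0, only the demand's own period would be checked, mirroring the note that without these settings the system does not look one period forward or backward.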

Next one is “Assign product to product allocation”


We maintain material and plant combination over here.

Next is Product Allocation Overview, which is a reporting app.

Release For Delivery (RFD)


This is under aATP. RFD is a Fiori app which is used to do manual allocation. We are also able to create the delivery from RFD. It is used for last-minute orders and urgent deliveries with external priority, when we can't change the BOP strategies; then we use RFD. For urgent orders, the system will not check incoming PO/PR/stock; it will check other orders and take quantities from them. Orders with product allocation can't be changed.

To do it, first in Fiori, search for Configure order fulfillment responsibilities


Use +, provide the name, description and responsibility definition, and put our own logic (just like we did in BOP).
Also assign the user here and save.

Now search for Release for delivery app in Fiori,

Select the material and click on Release for Delivery. Also, from here, by double-clicking on the material, we could see from which sales order we could take that material and then do the release for delivery. For any urgent SO, it will take quantity from a non-urgent SO and confirm the urgent one, and that SO can then be delivered.
Fiori App for SD:
Create / Change / Display / List Inquiries, Manage Sales Inquiries, List of Incomplete Sales Inquiries, and the same for quotations, contracts and sales orders; Track Sales Order, Sales Order Fulfillment, My Sales Overview, Manage Outbound Delivery, Sales Volume.

Settlement Management:
This is the rebate perspective. Suppose we agreed to pay the customer 2% once the customer has done business greater than 1000 USD. Now the customer has done 2000 USD, so 2% of 2000 USD, i.e. 40 USD, is not our profit. We put it in a separate kitty account to accumulate. Accumulating that 2% into the kitty account over the given time period is called accruals.
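The accrual arithmetic above can be sketched directly; the threshold and rate are the example's values, and the function itself is purely illustrative:

```python
# Sketch of the accrual math described above: once business volume passes
# the threshold, a percentage of that volume is parked in a separate
# accrual ("kitty") account rather than counted as profit.

def accrual(volume_usd: float, threshold_usd: float = 1000.0,
            rate: float = 0.02) -> float:
    """Rebate accrual: rate * volume once volume exceeds the threshold."""
    return rate * volume_usd if volume_usd > threshold_usd else 0.0

print(accrual(2000.0))  # 2% of 2000 USD = 40.0, as in the example above
print(accrual(800.0))   # below the threshold, nothing accrued -> 0.0
```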

In S4 HANA, it starts with the condition contract (WCOCO), where we maintain the customer, material, business volume, conditions and settlement calendar, then release the contract. Then we do SO - delivery - PGI - invoice. The system checks the invoices for settlement, which could be partial, final or delta settlement. We could do it in two ways: one is totally independent, and the other is to put the accrual condition type in the pricing procedure, with that accrual condition type also being part of the condition contract (WCOCO). This is not mandatory. In the latter case, with each invoice, the system posts the accruals.

Fiori App – manage cust condition contract , extend condition contract.

In ECC, SAP maintains the VBOX table for all documents, retroactive or not, for which a rebate could be applicable. The system uses the variable key to track all invoices where a rebate could apply. For retroactive rebates, the system uses the VBOF tcode, which looks at the VBOX table details to process retroactive rebates by reprocessing and repricing the documents. Repricing means the statistical rebate condition will be added and accumulated. But VBOX is an index table, and in HANA we don't need index tables for processing.

Configuration:

Path is SPRO- Log gen – Settlement mgmt.- Condition contract mgmt.- condition contract condition-
sales
This is just like our ECC rebates. Here, the condition tables in which we maintain condition records are assigned to a condition type, that condition type is assigned to a condition type group, and that group is assigned to a condition contract type.

Now under sales, we define condition type RES1 as rebate condition type.

The next thing is "Specify CC determination relevance and copy control for condition type".
Here we define whether a doc type is relevant for condition contracts, as well as the copy control activation with usage.

Next is Define condition type grp where we create our own.

Next step is to assign condition type to condition type grp.


Next step is assign condition type grp to condition contract type by below path,
To add more fields in below business volume section, use path-
Through tcode WCOCO, we need to put the customer, material and validity period, and in the condition table we need to choose REBA.

We then need to release it by clicking the green flag.

The process is SO - delivery - PGI - billing.

Now, at last, use tcode WB2R_SC; after that, go to WCOCO with that contract number and we could find that the credit memo has already been created.
The link below is a good blog on settlement management:

https://blogs.sap.com/2017/02/17/settlements-management-in-s4-hana/

Steps to create Condition Contract

1. Execute transaction WCOCO ( W ko-ko ) and click on ‘Create’.


2. Select the type of contract you would want to create

3. Enter the customer and period for which you want to create the condition contract.

4. Enter the Sales data on the 2nd tab

5. Enter the business volume selection criteria. This step defines which invoices are relevant for application of this condition contract.

6. Enter the settlement Material


7. Enter the Settlement Calendar. This step defines when you would like to carry out
partial/final settlements.

8. Create Rebate Accrual condition

9. Change condition Table

10. Create Rebate condition

11. Save the Condition Contract


12. One thing of utmost importance is to release the contract. Hence, release the contract in WCOCO.

13. Once done the light will become ‘Green’. Save it.

Sales Flow

I won’t be explaining how to create a Sales document, Delivery, PGI, Invoice etc but
what needs to be mentioned is that either these documents can be created before the
condition contract creation or after the condition contract creation. In case Sales
document is created after the contract creation, REBA accrual condition would be
available in the conditions tab.

Don't worry about the 7% as opposed to the 10% I created earlier, as this is from a different example.

If the sales document was already created before the contract, the rebate is provided
as part of the settlement process, and the rebate condition will be visible in the credit
memo, which books the correct accrual amount to the respective accounts. So there is
no need to adapt the sales order anymore.

Actual Settlement
Run transaction WB2R_SC and enter the relevant values. Please note that Run Type
‘Check Run’ does not post any data; for the actual settlement, use ‘Live Run’.

Once executed, you will get a message like shown below

Credit Memo

In WCOCO, you can see a credit memo created with the relevant values.
In the Header conditions, you will find the Rebate condition we had created as part of
the condition contract.

Business Volume

Run transaction WB2R_BUSVOL,


As you can see below, business volume 1.5 is updated against the condition contract
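The arithmetic behind the settlement above can be sketched as follows. This is a minimal illustration only, not SAP's actual implementation; the invoice values and the 10% rebate rate are made-up examples, chosen so the accumulated business volume matches the 1.5 shown above.

```python
# Minimal sketch of condition-contract settlement math (hypothetical values,
# not SAP's actual implementation).

def settle(business_volume: float, rebate_rate: float) -> float:
    """Credit memo amount = accumulated business volume x rebate rate."""
    return round(business_volume * rebate_rate, 2)

# Business volume accumulated from the invoices matching the
# business volume selection criteria of the condition contract:
invoices = [0.5, 1.0]            # hypothetical net invoice values
business_volume = sum(invoices)  # 1.5, as reported by WB2R_BUSVOL above

# Final settlement (WB2R_SC 'Live Run') posts a credit memo for this amount:
credit_memo_amount = settle(business_volume, rebate_rate=0.10)
print(business_volume, credit_memo_amount)  # 1.5 0.15
```

The same function with a partial-settlement calendar would simply be called once per settlement date on the volume accumulated so far.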

Important Reports:
Important Transactions

There is a whole list of important transactions for the new Settlement management
solution.
Conversion from ECC rebate to S4 HANA Settlement:

There is no direct conversion from rebates to settlement management, but rebate data and agreements can be brought over to the S/4 system, and we can settle them there. However, we cannot extend an old rebate agreement or create a new one in S/4HANA; we can only create condition contracts instead of rebate agreements.

Conversion Process:
Always remember that moving from ECC to S/4HANA is possible only from ECC 6.0. If the version is lower
than ECC 6.0, we first need to upgrade to ECC 6.0 and then perform the transition.

The conversion process has 5 phases:

1.Preparation Phase:

Here we use external tools called Maintenance Planner and Custom Code Analyzer.

Maintenance Planner tells us which standard SAP processes are not supported, and the Custom Code
Analyzer tells us which Z-code will not work in S/4. You will get reports called pre-check reports
containing all these details. It is then your decision whether to go for S/4HANA.

When you plan to go for S/4HANA, you need to convert all customers and vendors to Business Partners (BP); otherwise, after
moving to S/4, those customers and vendors will not be available.
Below is the step to move them to BP:

Cross Application Component—


This process is carried out strictly in ECC, to convert customers and vendors to BP.

2. Installation: using the Software Update Manager (SUM).

3. Delta Config

4. Delta Migration

5. Post Migration

This is called the migration cockpit.

From the simple logistics point of view, everything is done; all that remains is to do it for S/4 Finance.

But for data migration, we need to migrate the data for credit management, by doing this:
SAP FIORI:

Link:

https://fioriappslibrary.hana.ondemand.com/sap/fix/externalViewer/
Here we need to search
Click on

This role should be assigned in PFCG so that the user gets access to this app. The functional
consultant needs to provide the following information to the Fiori technical consultant:
Below is SAP FIORI
Through the App Finder, we can find the apps assigned to us based on our role.

Select your required app and add it


Through the Settings tab, we can control the fields below.
From ‘Edit Home Page’, we can create our own group and add tiles to it, as below:

This is typically how Fiori looks, with three types of apps: reporting, transactional, and fact sheets.
Below we can see how the reports are fetched in each tab.

At the end, we may be concerned about which tables have been changed and are now unused, or kept
for view purposes only. For that, we have a solution.

For example, the pricing table KONV has been changed to PRCD_ELEMENTS.

Now enter KONV in SE11.


If it does not have a physical table associated with it, we need to pay attention to this table.
Index tables are no longer used; they are kept for view purposes only so that existing functionality is not affected:

VAKPA / VLKPA / VFKPA

VAPMA / VLPMA / VFPMA

MRP and MRP LIVE:


MRP is a tool for getting the right material, in the right quantity, at the right time, to the right place. It is a planning engine. For a particular
requirement, MRP checks receipts (purchase orders, purchase requisitions, production orders), demands (sales orders, STOs, etc.), and the
existing stock on hand. If there is a shortage, it creates proposals: planned orders for in-house production and purchase requisitions for
external procurement.
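The netting logic described above can be sketched as follows. This is a deliberately simplified illustration; a real MRP run also handles lot sizing, scheduling, sourcing, and much more, and all the quantities below are hypothetical.

```python
# Simplified sketch of the MRP net-requirements logic (illustrative only;
# real MRP also covers lot sizing, scheduling, sourcing, etc.).

def mrp_net(stock: float, receipts: list, demands: list, in_house: bool):
    """Net supply against demand and return a replenishment proposal, if any."""
    available = stock + sum(receipts)   # on-hand stock + POs, PRs, production orders
    required = sum(demands)             # sales orders, STOs, forecasts, ...
    shortage = required - available
    if shortage <= 0:
        return None                     # coverage is sufficient, no proposal
    # In-house production -> planned order; external procurement -> purchase requisition
    proposal_type = "planned order" if in_house else "purchase requisition"
    return (proposal_type, shortage)

print(mrp_net(stock=100, receipts=[50], demands=[120, 80], in_house=True))
# ('planned order', 50)
```

With demand of 200 against 150 of available supply, the engine proposes replenishment of 50, as a planned order because the material is produced in-house.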

Issue with MRP Tool Run:

1. Runs only as a night batch job, as it is time-consuming
2. Performance issues
3. It plans as if capacity were available 24×7, 365 days a year, and thus does not consider capacity constraints
4. Not handy
5. Dependency on correct master data
6. Static tool, as it does not respond or change based on the situation
7. Only single-plant planning, no cross-plant planning
8. No inventory optimization
9. No backlog planning

As MRP is very time-consuming and static in nature, companies run it as a night batch job. That is why,
in S/4HANA, we got MRP Live. The advantage of MRP Live is that it can run at any time. In the case of bulk orders, if a user
wants to check MRP, he can do so immediately, as it is live. Old MRP (MD01 / MD04 / MD05)
requires 3-4 hours with lots of resources. MRP Live is very fast because it runs in memory. MRP Live can also do cross-plant
planning, which old MRP cannot.

Now take the example of plants A, B, and C, where sugar is produced in A, packaged in B, and sold by C, and salt is
produced in C, packaged in B, and sold by A. In traditional MRP, plant sequencing would be either A-B-C or C-B-A,
that is, in one direction only. MRP Live can plan in both directions. Both MRP and MRP Live
work at plant level, but old MRP has no option for inventory optimization; in S/4, this option exists through
DDMRP. DDMRP tells you how much buffer stock to keep so that there will
be no shortage. For DDMRP, the MRP type of the material will be D1. In the new MRP Live, PPDS is embedded,
so there is no need to run PPDS separately as we do in ECC. In the material master, we have a new APO tab
which actually triggers PPDS automatically.
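The buffer-stock idea behind DDMRP can be illustrated with the generic DDMRP zone formulas. This is the textbook methodology, not SAP's exact implementation (which derives these parameters from the PPH* settings / Fiori apps described later); all parameter values here are hypothetical, and the green zone is simplified.

```python
# Generic DDMRP buffer-zone sketch (textbook formulas; SAP's implementation
# may differ in detail, and all parameter values are hypothetical).

def ddmrp_buffer(adu, dlt, lt_factor, var_factor, moq=0.0):
    """Return (red, yellow, green) buffer zones for a D1-type material.

    adu        - average daily usage
    dlt        - decoupled lead time in days
    lt_factor  - lead-time factor (0..1)
    var_factor - variability factor (0..1)
    """
    red_base = adu * dlt * lt_factor
    red = red_base + red_base * var_factor   # red base + red safety
    yellow = adu * dlt                       # demand over the lead time
    green = max(moq, red_base)               # simplified: larger of MOQ and red base
    return red, yellow, green

red, yellow, green = ddmrp_buffer(adu=10, dlt=5, lt_factor=0.5, var_factor=0.4)

# Replenish when the net flow position (on hand + on order - qualified demand)
# falls to or below the top of the yellow zone; order up to the top of green.
net_flow = 60 + 20 - 15
if net_flow <= red + yellow:
    order_qty = (red + yellow + green) - net_flow
```

With these made-up numbers the zones come out to red 35, yellow 50, green 25, and the net flow position of 65 triggers a replenishment proposal of 45 to refill the buffer.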

In classic MRP, the user runs MD01, which flows through the ABAP layer and then the database. In the ABAP layer, based
on the algorithm, data fetching, summation, and calculation happen. Because of the interaction between the database and
the ABAP layer, it takes a huge amount of time, as a traditional DB is very slow due to its disk-based design. That is why old
MRP is slow, static, and needs a batch job. In S/4HANA, MRP Live runs entirely in the HANA DB, in memory, and the
logic is also written at the DB level; the whole algorithm plus all the tables are in the HANA DB, so it is very fast.
In S/4HANA, we have both classic MRP and MRP Live: classic MRP logic is still in the application layer, while MRP Live
runs in the HANA DB layer. So MD01 triggers old MRP and MD01N triggers the new MRP Live.
Why do we still need classic MRP when we have the new MRP Live? Because for a material with an
enhancement lying in the ABAP layer, MRP Live, whose logic is in the HANA DB layer, cannot plan that material. In classic
MRP, MD01 takes only a plant and MD02 takes one plant and one material;
MRP Live (MD01N) takes many plants and materials.

If we run MD01N, we see materials planned, materials that failed, materials planned with old MRP, PPDS-relevant materials, etc.
In S/4HANA, we can manually flag a material for classic MRP through tcode MD_MRP_FORCE_CLASSIC.

In SA38, use program PPH_CHECK_MRP_ON_HANA to see which materials use classic MRP and which use MRP Live.
Fiori App: Schedule MRP Run

DDMRP: a new MRP with MRP type D1, introduced in 1709; we assign D1 in the material master.
DDMRP is plant-specific.

In 1709, in SM30, enter PPH* and press F4, and we will get the 5 tables below, which we need to maintain
for DDMRP.
In 1809, we have Fiori-based apps for this.

PPDS: Production Planning and Detailed Scheduling, previously part of APO. It is a planning engine. PPDS is
much like MRP, as it checks the existing stock against incoming and outgoing stock details; if there is a shortage,
the system generates planned orders and purchase requisitions. Through PPDS we do planning, and the
planner then moves to the detailed scheduling board. In MRP we have a few algorithms, but in
PPDS, along with the MRP algorithms, we have additional algorithms known as heuristics. PPDS is more advanced
than MRP. A material can be managed either by PPDS or by MRP. APO-PPDS cannot be moved to S/4 PPDS. So for customers using
ECC with APO, they have to convert ECC to S/4, but after that APO should remain linked with S/4HANA;
they should not use S/4 PPDS, as it does not have all the functionalities of APO-PPDS.
