
SAP HANA: Interview Questions 131 through 140

131. Explain the process flow for modeling within SAP HANA?
At a high level, the process flow for modeling with SAP HANA works like this:

  • Import Source System metadata
    • Physical tables are created dynamically
    • 1:1 schema definition of source system tables
  • Provision Data
    • Physical tables are loaded with content
  • Create Information Models
    • Database Views are created
      • Attribute Views - Separate Master Data from Fact Data
        • Build the required master data objects as Attribute views
        • Join text tables to master data tables
        • If required, join master data tables to each other
          • Example, join Plant to Material 


      • Analytic Views - Create Cube-like view by joining attributes view to Fact data
        • Build a Data Foundation based on transactional tables
          • Select Measures (key figures)
          • Select attributes IDs (docking points for joining attributes views)
        • Join attribute views to data foundation
          • The Analytic View is a kind of star schema
      • Calculation Views - If joins are not sufficient, create a Calculation View, which looks like a view from the outside but contains SQLScript inside (see the sketch after this list)
        • Composite view of other views (tables, re-use join, OLAP views)
        • Consists of a graphical and script-based editor
        • SQLScript is a HANA-specific functional scripting language
          • Think of a 'SELECT FROM HANA' as a data flow
          • JOIN or UNION two or more data flows
          • Invoke other functions (built-in CE functions or generic SQL)
      • Analytic Privileges
        • Used for row-level security
        • Can be based on attributes in analytic views
        • Can specify values for a particular role 


  • Deploy
    • column views are created and activated
  • Consume
    • consume with client tools - BICS, SQL, MDX
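
As a rough illustration of the scripted approach, here is a minimal SQLScript sketch using CE functions; the schema, table, and column names are hypothetical:

    -- Read two column tables as data flows, join them, and aggregate:
    it_items = CE_COLUMN_TABLE("MYSCHEMA"."SALES_ITEM", ["MATERIAL_ID", "AMOUNT"]);
    it_mat   = CE_COLUMN_TABLE("MYSCHEMA"."MATERIAL", ["MATERIAL_ID", "MATERIAL_NAME"]);
    it_join  = CE_JOIN(:it_items, :it_mat, ["MATERIAL_ID"],
                       ["MATERIAL_ID", "MATERIAL_NAME", "AMOUNT"]);
    var_out  = CE_AGGREGATION(:it_join, [SUM("AMOUNT") AS "TOTAL_AMOUNT"],
                              ["MATERIAL_ID", "MATERIAL_NAME"]);
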
132. Explain the main steps to create an Attribute view?
Attribute View

  • Attributes add context to data
  • Can be regarded as Master Data tables
  • Can be linked to fact tables in Analytic views 
Steps:

  • Set Parameters
    • Enter a unique name (technical name) and description
      • The allowed characters are: capital letters (A-Z), numbers (0-9), and underscore (_).
      • There is no multi-language support for Description metadata
    • Select View Type
      • Attribute View
    • Select Subtype
      • Standard
    • Time - a time view creates a master data view for time characteristics (calendar year, quarter, month, week, day, fiscal year/period, etc.)
      • Derived (Read-only) - is a linked copy that allows usage of two Attribute views with exactly the same definition in the same Analytic or Calculation view.
        • An Attribute View cannot be used more than once in another view. If needed, you can create one or several Derived Attribute Views.


      • Copy From feature can be used to create a new Attribute view (independent) from an existing Attribute view (as a template). 


  • Table Selection
  • Table joins and properties
  • Select Attributes
  • Create Hierarchies
  • Save and Activate
  • Data Preview

SAP HANA: Interview Questions 121 through 130

121. What is a Materialized view or Materialization?
In computing, a materialized view is a database object that contains the results of a query. 
  • For example, it may be a local copy of data located remotely, or may be a subset of the rows and/or columns of a table or join result, or may be a summary using an aggregate function.
The process of creating a materialized view is sometimes called Materialization, which is a form of caching the results of a query.

122. What is SAP Information Composer?
SAP Information Composer is a web-based application that allows business users to upload data to the SAP HANA database and manipulate (cleanse and model) it.

The SAP HANA information composer uses a Java server, which interacts with the SAP HANA database. The Java server communicates with the SAP HANA information composer client via HTTP (port 8080) or HTTPS (port 8443).

The SAP HANA information composer client is accessible to users who are assigned the IC_MODELER role. This role allows users to upload new content into the SAP HANA database and to create physical tables and calculation views.

123. Explain Information Models?
Information Models are database views over the transactional data stored in the physical tables of the SAP HANA database, used for analytical purposes. 

Analytical data modeling is only possible for column tables, i.e. the Information Modeler only works with column storage tables.

For that reason, Replication Server creates SAP HANA tables in column store by default. 
Data Services also creates target tables in column store as default for SAP HANA database. 

The SQL command to create a column table is "CREATE COLUMN TABLE Table_Name ...". 
Also, the data storage type of a table can be changed from row to column storage with the SQL command "ALTER TABLE Table_Name COLUMN".
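
For illustration, a minimal sketch with a hypothetical schema and table name:

    -- Create a table directly in the column store:
    CREATE COLUMN TABLE "MYSCHEMA"."SALES_ITEM" (
        "MATERIAL_ID"  INTEGER,
        "SALES_AMOUNT" DECIMAL(15,2)
    );

    -- Convert an existing row-store table to column storage:
    ALTER TABLE "MYSCHEMA"."SALES_ITEM" COLUMN;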

Using the SAP HANA Studio Information Modeler perspective, we can publish and consume SAP HANA table data at four levels of modeling, which are basically combinations of Attributes and Measures:
  • Attribute View, 
  • Analytic View, 
  • Calculation View, and 
  • Analytic Privilege. 
124. Explain Attributes and Measures?
Attributes are individual non-measurable analytical elements. Attributes add context to data. 
These are qualitative descriptive data similar to Characteristics of SAP BW. 
  • For example, MATERIAL_NAME. 
There are three types of Attributes in Information Modeling:
  • Simple Attributes are individual non-measurable analytical elements that are derived from the data foundation. 
    • For example, MATERIAL_ID and MATERIAL_NAME are attributes of a MATERIAL subject area.
  • Calculated Attributes are derived from one or more existing attributes or constants. The attribute is based on static value or dynamic calculation. 
    • For example, extracting the year part from the customer registration date, assigning a constant value to an attribute which can be used for arithmetic calculations.
  • Private Attributes are used to model Analytic Views and cannot be used outside the view. Private attributes add more information to the data model. Private attributes of Fact tables are used to link to the subject area or dimensions i.e. Attribute Views. 
    • For example, we create an analytic view ANV_SALES to analyze the sales of materials and select MATERIAL_ID as a private attribute from the database table SALES_ITEM. In this case, MATERIAL_ID could be used only for modeling data for ANV_SALES. 

Measures are simple measurable analytical elements. Data that can be quantified and calculated are called measures. They are similar to Key Figures in SAP BW. Measures are defined in Analytic and Calculation Views. 

Three types of measures can be defined in Information Modeling:
  • Simple Measures are measurable analytical elements that are derived from the data foundation, i.e. defined in the fact table. 
    • For example, SALES_AMOUNT.
  • Calculated Measures are defined based on a combination of data from OLAP cubes, arithmetic operators, constants, and functions. 
    • For example, Net Revenue equals Gross Revenue - Sales Deduction, assigning a constant value to a measure for some calculation.
  • Restricted Measures are used to filter the value based on the user-defined rules for the attribute values. 
    • For example, Gross Revenue of a material for country = US.
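
In plain SQL terms (an analogy only; in HANA these measures are defined in the modeler, and the fact table here is hypothetical), a calculated and a restricted measure correspond roughly to:

    SELECT
        SUM("GROSS_REVENUE") - SUM("SALES_DEDUCTION") AS "NET_REVENUE",   -- calculated measure
        SUM(CASE WHEN "COUNTRY" = 'US'
                 THEN "GROSS_REVENUE" ELSE 0 END)     AS "GROSS_REV_US"  -- restricted measure
    FROM "MYSCHEMA"."SALES_ITEM";
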
125. Explain Table Joins and Properties?
The various Join Types available while modeling Attribute Views are 
  • Referential, 
  • Inner, 
  • Left Outer,
  • Right Outer, and 
  • Text Join. 
Apart from that, the Join Condition and Cardinality (1:1, 1:N, or N:1) need to be defined accordingly. If we select the Join Type as Text Join, then we need to define the Language Column and the Description Mapping.

The output structure of the Attribute View must be explicitly defined. At least one Key Attribute is mandatory; any number of Non-key Attributes may be defined. We can also apply static filter values (lists of values) on any columns of the tables selected in the Attribute View, and a filtered column does not need to be selected as an output attribute.
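
For reference, a Text Join behaves roughly like the following plain SQL (hypothetical master data and text tables; in the modeler the session language is applied automatically via the Language Column):

    SELECT m."MATERIAL_ID", t."MATERIAL_NAME"
    FROM "MYSCHEMA"."MATERIAL" m
    LEFT OUTER JOIN "MYSCHEMA"."MATERIAL_TEXT" t
        ON  t."MATERIAL_ID" = m."MATERIAL_ID"
        AND t."LANGUAGE" = 'E';  -- language column, here filtered to English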

126. Explain Hierarchies?
Hierarchies are used to structure and define the relationship between attributes of an Attribute View that are used for business analysis. Exposed models that consist of attributes in hierarchies simplify the generation of reports. 
  • For example, consider the TIME Attribute View with YEAR, QUARTER, and MONTH attributes. We can use these YEAR, QUARTER, and MONTH attributes to define a hierarchy for the TIME Attribute View. 
Two types of hierarchies are supported in Attribute Views of Information Modeler:
  • Level Hierarchy is rigid in nature: the root and the child nodes can be accessed only in the defined order. It needs one attribute per hierarchy level, and the number of levels defined is fixed.
    • For example, COUNTRY, STATE, and CITY.
  • Parent/Child Hierarchy is very similar to a BOM (parent and child) or Employee Master (employee and manager). The hierarchy can be explored based on a selected parent, and a child can itself be a parent. This hierarchy is derived based on the values, and a variable number of levels for sub-trees within the hierarchy is possible. 
    • For example, EMPID, and MGRID.
At present, hierarchies defined in the Information Modeler are only accessible via MDX, which means that such hierarchies can currently only be used from MS Excel.
127. Explain Time Dimension Attribute View?
Two types of Time Dimension Attribute Views are supported in Information Modeler. 
  • For the Gregorian type Time Dimension, the data is stored in _SYS_BI.M_TIME_DIMENSION. 
  • For the Fiscal type Time Dimension, the data is stored in _SYS_BI.M_FISCAL_CALENDAR. 
Time Dimension Attribute Views are very often used while defining the Analytical View.
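
The generated time data can be inspected directly; a minimal sketch (the column names are assumed from the standard generated table):

    SELECT "DATE_SQL", "YEAR", "QUARTER", "MONTH", "WEEK"
    FROM "_SYS_BI"."M_TIME_DIMENSION"
    WHERE "YEAR" = '2012';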

128. Explain Data Foundation & Logical View?
In the Data Foundation tab we select the physical fact table. Next we define the Attributes and Measures of the fact table; at least one Attribute and one Measure must be defined. In the output structure, the attributes of the fact table appear under Private Attributes, as these are related only to the fact table. Optionally, we can apply static filter values on attributes of the fact table, and we can define Calculated Measures or Restricted Measures while designing the data foundation. We can also join additional database tables: attributes may be selected from several tables as long as they are joinable, but measures may be selected from only one table (the transactional data).

In the Logical View tab we can join any number of Attribute Views from any package to the Data Foundation. Attribute Views are joined to the Private Attributes of the Data Foundation; typically we include all key attributes of the Attribute View in the join definition. The default join type is Inner Join, and the default cardinality is N:1.

The foundation view shows the physical table with all fields that can be incorporated into the final model. The logical view displays only those fields that have been selected for the data model, including any restricted and calculated measures defined.

130. Explain Attribute Views, Analytic Views, Calculation Views, Analytic Privilege, Package, and Procedure?
Attribute Views are the Reusable Dimensions or subject areas used for business analysis. Attribute Views are defined in Information Modeling to separate Master Data Modeling from Fact data. 
  • Examples of Attribute Views can be Customer, Material, Time. 
We define the Key and Non-key Attributes of the physical master data tables. We can join text tables to master data tables, or join two master data tables to each other, like product and product group. Tables can be selected from multiple schemas; an Attribute View is not restricted to one schema. Activated Attribute Views can be consumed for reporting or linked to the fact tables in Analytic Views.

Analytic Views are the multidimensional views or OLAP cubes. Analytic Views are used to analyze values from a single fact table of the data foundation based on the related attributes from the Attribute Views, which makes them look very similar to a star schema. We create a cube-like view by joining Attribute Views to the fact table data. 
  • For example, total sales of a material in a given region at a given time.
Calculation Views are used to create a data foundation using database tables, Attribute Views, Analytic Views, and other Calculation Views to address a complex business requirement. 

If joins are not sufficient, we create a calculation view with SQLScript. Also, Calculation Views are required if the Key Figures span across tables. 

A Calculation View is a composite column view visible to the reporting tools. When the view is accessed, a function is implicitly executed. Calculation Views can be modeled via the graphical editor or via SQLScript, and they support UNION. 
  • For example, comparing the sales of a material in a particular region for the last two years.
Analytic Privileges define privileges to partition data among various users sharing the same data foundation. Analysis authorizations for row-level security can be based on attributes in Analytic Views. 

The SAP HANA database supports Analytic Privileges that represent filters or hierarchy drill down limitations for analytic queries. 
Analytic Privileges grant access to values with a certain combination of dimension attributes. 
  • For example, access to a cube with sales data can be restricted to values with the dimension attributes region = US and year = 2010. 
As Analytic Privileges are defined on dimension attribute values and not on metadata, they are evaluated dynamically during query execution.


SAP HANA Packages are used to group related information objects in a structured way. An Analytic View defined in one package can use Attribute Views from other packages. Packages do not restrict access to information objects for modeling.

An SAP HANA database stored procedure defines a set of SQL statements that can process data. Stored procedures follow constructs similar to T-SQL in Microsoft SQL Server or PL/SQL in Oracle Database.
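
A minimal SQLScript procedure sketch (the schema, table, and column names are hypothetical):

    CREATE PROCEDURE get_total_sales (
        IN  iv_material INTEGER,
        OUT ev_total    DECIMAL(15,2) )
    LANGUAGE SQLSCRIPT READS SQL DATA AS
    BEGIN
        -- Aggregate the sales amount for one material:
        SELECT SUM("SALES_AMOUNT") INTO ev_total
        FROM "MYSCHEMA"."SALES_ITEM"
        WHERE "MATERIAL_ID" = :iv_material;
    END;

    -- Invocation:
    CALL get_total_sales(1001, ?);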

Notes: 
  • On activation of the Information Models, database column views are created in the schema _SYS_BIC. These column views can be accessed from reporting tools. 
  • Analytic Views do not store data. Data is read from the joined database tables. 
  • Joins and calculated measures are evaluated at runtime.
  • So, typically, while designing Information Models we start by creating a Package, then design the Attribute Views, Analytic Views, and Calculation Views within that content package.

SAP HANA: Interview Questions 111 through 120

111. Explain Persistence Layer within SAP HANA?
  • The persistence layer of SAP HANA relies on Data and Log volumes. The in-memory data is regularly saved to these volumes.
    • Data:
      • SQL data and undo log information
      • Additional HANA information, such as modeling data
      • Kept in-memory to ensure maximum performance
      • Write process is asynchronous
    • Log:
      • Information about data changes (redo log)
      • Directly saved to persistent storage when transaction is committed
      • Cyclical overwrite (only after backup)
    • Savepoint:
      • Changed data and undo log is written from memory to persistent storage
      • Automatic
      • At least every 5 minutes (customizable)
  • Data and log volumes are used as follows:
    • On a regular basis, data pages and before images (undo log pages) are written in the data volumes. This process is called a Savepoint.
    • Between two savepoints, after images (redo log pages) are written in the log volumes. This is done each time a transaction is committed.
    • Shadow paging is used to undo changes that were persisted since the last savepoint.
      • With the shadow page concept, physical disk pages written by the last savepoint are not overwritten until the next savepoint is successfully completed.
      • Instead, new physical pages are used to persist changed logical pages.
      • Until the next savepoint is complete, two physical pages may exist for one logical page:
        • The shadow page, which still contains the version of the last savepoint.
        • The current physical page which contains the changes written to disk after the last savepoint.
112. Explain System Restart procedure?
  • After a restart, the system is restored from the savepoint version of the data pages.
    • Note that all data changes written since the last savepoint are not restored.
  • After the savepoint is restored, the log is replayed to restore the most recent committed state.
  • The system restart includes the following actions:
    • Restore data
      • Reload the last savepoint
      • Search the undo log for uncommitted transactions saved with the last savepoint (stored on the data volume) and roll them back
      • Search the redo log for committed transactions since the last savepoint (stored on the log volume) and re-execute them
    • Load all the tables of the row store into memory
    • Load the tables of the column store that are marked for preload into memory
      • Note: Only tables marked for preload are loaded into memory during startup.
      • Tables marked for loading on demand will only be loaded into memory at first access.
113. How do you identify a table's storage type?
From the HANA Studio, expand Catalog, right-click on a table and select Open Definition.
The type of the table will be shown within the "Type" dropdown.
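
Alternatively, the storage type can be queried from the monitoring view M_TABLES; a sketch with a hypothetical table name:

    SELECT "SCHEMA_NAME", "TABLE_NAME", "IS_COLUMN_TABLE"
    FROM "SYS"."M_TABLES"
    WHERE "TABLE_NAME" = 'SALES_ITEM';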


114. Explain the CO-PA (Controlling - Profitability Analysis) Accelerator?
The Profitability Analysis (CO-PA) Accelerator assists in evaluating market segments, which can be classified according to products, customers, orders, or any combination of these, or strategic business units, such as sales organizations or business areas, with respect to your company's profit or contribution margin.

The aim of the system is to provide the sales, marketing, product management, and corporate planning departments with information to support internal accounting and decision-making.

115. What forms of Profitability Analysis are supported?
Two forms of Profitability Analysis are supported: 
  • Costing-based: a form of profitability analysis that groups costs and revenues according to value fields and costing-based valuation approaches. 
    • guarantees access at all times to a complete, short-term profitability report
    • emphasizes matching the revenues for goods and/or services provided (the value that a company gains as a result of sales) against the related expenses for those items (the value that is lost when products are transferred out of the company)
    • displays profit and loss information in a manner optimized for conducting margin analysis, and as such is optimal for the sales, marketing, and product management areas
  • Account-based: a form of profitability analysis organized in accounts and using an account-based valuation approach.
    • uses cost and revenue elements
    • provides a report that is permanently reconciled with financial accounting
    • emphasizes summarizing the activity and situational change over a period of time, for a given organizational unit
    • presents the revenues and primary expenses incurred during a given period of time and the changes in stock value levels, work in process, and capitalized activities; as such, it is optimal for the production and profit center areas
    • Profitability Analysis (CO-PA) calculates profits according to the cost-of-sales method of accounting. Profit Center Accounting (EC-PCA), on the other hand, supports both period accounting and the cost-of-sales approach. 


116. Explain Market segments and Performance figures?
  • Market segments are normally some combination of information regarding customers, products, and the selling organization. 
  • Performance figures are normally measurements of quantities, revenues, discounts, surcharges, product costs, margins, period costs, etc.
117. Explain Sales Quantity, Net revenue, Contribution margin I, Contribution margin II, Contribution margin III, and Operating profit?
Each level of this contribution margin scheme is derived from the previous one by subtracting the items listed under it:
  • Sales Quantity / Sales Revenue
    • minus Customer discount
    • minus Sales commission
    • minus Direct sales costs
  • Net revenue
    • minus Direct material costs
    • minus Variable production costs
  • Contribution margin I
    • minus Material overhead
    • minus Fixed production costs
  • Contribution margin II
    • minus Variances
  • Contribution margin III
    • minus Overhead costs
  • Operating profit
Typical analyses based on this scheme:
  • Success of Marketing Activities: study the success of the most recent sales promotion for a product line.
  • Revenue and Cost Structure: study the impact of a pricing strategy for a group of customers.
118. Define operating concern?
In order to use Profitability Analysis (CO-PA), you have to define operating concerns. An operating concern is an organizational unit in Financials.
The structure of an operating concern is determined by:
  • Analysis criteria (characteristics, or attributes) and
  • The values to be evaluated (value fields) (only in costing-based Profitability Analysis).
  • G/L accounts (only in account-based Profitability Analysis).
119. How to find the tables and their dependencies for all of the ERP applications?
Using transaction code SD11, one can find the tables in ERP and their dependencies.


120. Explain Segment table, Segment level, Actual Line Item, Plan Line Item, Characteristics, and Summarization levels?

  • Segment Table: Dimension table
  • Segment Level: Fact table
  • Actual Line Item: 
  • Plan Line Item:
  • Characteristics: Attributes
  • Summarization Level: Aggregation



SAP HANA: Interview Questions 101 through 110

101. How to add a system to the SAP HANA Studio?
  • Manually: With this method, you add one SAP HANA system at a time.
    • In the Systems view, right-click any blank area and choose Add system...
    • Fill in the server name, instance number and system description, and choose Next.
    • Fill in the user name and password, and choose Finish.
  • By importing a Landscape: This method allows you to connect to several SAP HANA systems at the same time, by importing an xml file generated previously by a landscape export from the SAP HANA Studio installed on your computer or another one.
    • Choose File → Import and choose SAP HANA → Landscape.
    • Specify the landscape file location, the destination folder for the import, and choose Finish.
    • Note: The landscape xml file does not contain any password. You will have to specify the user and password for any system added to the Systems view.
102. Explain the System View?
The Systems view lists all the systems that have been registered (manually, or by a landscape import).

  • Catalog: contains tables, views, indexes, etc., organized into schemas.
  • Content: contains HANA-specific modeling objects
    • The physical tables are the only storage area for data within SAP HANA.
    • All the information models that will be created in the modeler will result in database views.
    • As such, SAP HANA does not persist redundant data for each model, and does not create materialized aggregates.
  • Provisioning: The Provisioning folder is essentially related to Smart Data Access, a data provisioning approach in which you can combine data from remote sources (Hadoop, SAP ASE, SAP IQ) with data of your SAP HANA physical tables, by exposing them as virtual tables.
  • Security: System administrator defines users and roles

    103. Where are the Column views located?
    The column views that you create are always located in the schema _SYS_BIC; their metadata is in the schema _SYS_BI.
    If you run an SAP application directly on SAP HANA, then the schema will always be SAP<SID_of_the_system>. For example, if your productive CRM system is called CRP, the corresponding schema will be SAPCRP.
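
    An activated column view can be queried like any other view; a sketch with a hypothetical package and view name:

        SELECT "MATERIAL_ID", SUM("SALES_AMOUNT") AS "TOTAL"
        FROM "_SYS_BIC"."mypackage/ANV_SALES"
        GROUP BY "MATERIAL_ID";
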
    104. What are the SAP HANA Studio perspectives?
    • The SAP HANA Administration Perspective: is a main view to administer HANA systems.
      • Start and stop a system
      • Configure a system
      • Monitor a system
      • Backup and restore a system
      • Perform a problem analysis
    • The SAP HANA Modeler Perspective: used to create, manage, and transport information models (packages, information views, etc.), define or extract data provisioning, configure the server, access the SAP HANA documentation, etc.
    • The SAP HANA Development Perspective: Includes a development perspective with debugging functionality.
      • SAP HANA Extended Application Services (XS Server) is an application server, web server, and basis for an application development platform, that resides inside SAP HANA.
    105. What is a Delivery Unit in SAP HANA?
    A Delivery Unit can be mapped to multiple packages and exported as a single entity, so that all the packages assigned to it are treated as one unit.
    Users can use this option to export all the packages that make up a delivery unit, and the relevant objects contained in them, to a HANA server or to a local client location.
    The user should create Delivery Unit prior to using it.
    This can be done through HANA Modeler → Delivery Unit → Select System and Next → Create → Fill the details like Name, Version, etc. → OK → Add Packages to Delivery unit → Finish.
    106. Explain SAP HANA Engines?

    Run the Plan Visualization on a query to determine which engine is used to process it (see also the EXPLAIN PLAN sketch after the list below). 


    • Join Engine:
      • Used when querying an Attribute View and also when running SQL script.
    • OLAP Engine:
      • Used for Analytic view (without calculated or derived columns).
    • Calculation Engine:
      • Used for Calculation views and Analytic views with calculated attributes
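
    From SQL, EXPLAIN PLAN gives a rough indication of how a statement is processed; a sketch with a hypothetical view name:

        EXPLAIN PLAN SET STATEMENT_NAME = 'engine_check' FOR
            SELECT "MATERIAL_ID", SUM("SALES_AMOUNT")
            FROM "_SYS_BIC"."mypackage/ANV_SALES"
            GROUP BY "MATERIAL_ID";

        -- The operators listed hint at the engine(s) involved:
        SELECT "OPERATOR_NAME", "TABLE_NAME"
        FROM "SYS"."EXPLAIN_PLAN_TABLE"
        WHERE "STATEMENT_NAME" = 'engine_check';
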
    107. If I execute a standard ANSI92 SQL statement from a BI tool or in SAP HANA Studio, what engines will be used?
    It depends on what objects you reference in your query. If you are just querying plain base tables, then the join engine will be used. As soon as you reference an analytic or calculation view, the other engines will be employed as well.
    108. If my Analytic View foundation is joined to attribute views, is both the OLAP and JOIN Engine used?
    Nope - during activation of the analytic view, the joins in the attribute views get 'flattened' and included in the analytic view runtime object. Only the OLAP engine will be used then.
    109. Explain SAP HANA System Architecture?
    • The Index Server of the SAP HANA database is a core component that orchestrates the database's operations.
    • The Connection and Session Management component creates and manages sessions and connections for the database clients, such as SAP BusinessObjects reporting tools or applications.
    • The Transaction Manager coordinates transactions, controls transactional isolation, and keeps track of running and closed transactions.
    • Request Processing and Execution Control is a set of specialized engines and processors that analyze and execute the client requests.
    • SQL Processor receives the incoming SQL requests and executes the Data Manipulation Language (DML) statements - Insert, Select, Update, or Delete.
    • Metadata Manager handles Data Definition Language (DDL) statements, such as the definition of relational tables, columns, views, indexes, and procedures.
    • Planning Engine handles planning commands, which allow financial planning apps to execute basic planning operations in the database layer.
    • Stored Procedure processor: handles procedure calls, which are written in SQLScript, a database programming language to execute application-specific calculations inside the database system.
    • MDX engine handles the incoming MDX requests.
    • Calculation engine: also handles incoming MDX requests, and supports SQLScript, MDX, and planning operations.
    • Persistence layer component: manages the communication between the Index server and the file system that stores the data volumes and transaction log volumes.
    • Name Server: The name server owns the information about the topology of the SAP HANA system. In a distributed system, the name server knows where the components are running and which data is located on which server.
    • Statistics Server: The statistics server collects information about status, performance, and resource consumption from the other servers in the system. The statistics server also provides a history of measurement data for further analysis.
      • The new Statistics Server is also known as the embedded Statistics Server or Statistics Service.
      • The new Statistics Server is now embedded in the Index Server.
    110. How SAP HANA achieves High Availability and Disaster Recovery?
    • High Availability per DataCenter
      • High Availability configuration
        • N active servers in one cluster
        • M standby server(s) in one cluster
        • Shared file system for all servers
      • Services
        • Name and Index server on all nodes
        • Statistics server (only on one active server)
        • Name server active on Standby
      • Failover
        • Server X fails
        • Server N+1 reads the indexes from shared storage and takes over the logical connection of server X
    • SAP HANA Host Auto-Failover (Scale-Out with Standby)
      • High-Availability enables the failover of a node within one distributed SAP HANA appliance.
      • Failover uses a cold standby node and gets triggered automatically.
      • In the landscape, up to 3 master name-servers can be defined.
        • During startup one server gets elected as active master.
        • The active master assigns a volume to each starting index server or no volume in case of standby server.
      • Master name-server failure: In case of a master name-server failure, another of the remaining name-servers will become the active master.
      • Index-server failure: The master name-server detects an index-server failure and executes the failover. During the failover the master name-server assigns the volume of the failed index-server to the standby server.
    • Disaster Tolerance between DataCenters: SAP HANA provides SAP HANA Storage Replication and SAP HANA System Replication.
      • The mirroring is offered on the storage system level.
      • Performance impact is to be expected on data changing operations as soon as the synchronous mirroring is activated.
      • In case of emergency, the primary data center is not available any more and a process for the take-over must be initiated.
      • This take-over process officially ends the mirroring, mounts the disks to the already installed HANA software and instances, and starts up the secondary database side of the cluster. If the host names and instance names on both sides of the cluster are identical, no further steps with hdbrename are necessary.
      • Note: So far, no hot standby via log shipping is available, not even log shipping by recovering log backups on a standby host.

    SAP HANA: Interview Questions 81 through 90

    81. What is a Delta Merge?
    Delta Merge is a regular database activity which merges the delta stores into the main store.
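
    A merge can also be triggered manually with SQL; a sketch with a hypothetical table name:

        -- Force a delta merge for one column table:
        MERGE DELTA OF "MYSCHEMA"."SALES_ITEM";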

    82. Explain Insert Only on Delta?

    Updating and inserting data into a sorted column store table is an expensive activity, as the sort order has to be regenerated and thus the whole table is reorganized each time.
    For this reason SAP has tackled this challenge by separating these tables into a Main Store (read-optimized, sorted columns) and Delta Stores (write-optimized, non sorted columns or rows).

    • MAIN Store: After a merge (= reorganization), SAP HANA stores all data in a main store organized by columns.
    • DELTA 1 Store: Afterwards, all new entries are stored in a delta store which is also organized by columns. However, for speed purposes, the delta store dictionary is not sorted (contrary to the main store).
      • If the delta reaches a certain size, it is merged back into the main store, with the complete dictionary sorted.
    • DELTA 2 Store: For recording high-speed events, such as Formula One sensor recordings or mass RFID readings, data entry can go to a second delta store, which is organized as a row store. This store works as a very short-term input buffer so that no sensor signal is lost. This second delta store is frequently merged into the first delta store and thus into the main store.
    83. How does the performance vary between the Main, Delta1, and Delta2 stores?
    Queries run against all stores simultaneously.
    • The main store is the largest but, thanks to its sorted dictionary, also the fastest.
    • DELTA 1 is slightly slower for read queries but much faster for inserts.
    • DELTA 2 is very fast for insert but much slower for read queries, and therefore kept relatively small.
    84. What's orchestration in computing?
    Orchestration describes the automated arrangement, coordination, and management of complex computer systems, middleware and services.


     85. Explain SAP HANA Software Optimization?
    SAP applications are, of course, required to support not only SAP HANA but all databases which are certified for ABAP. For this reason, there is an enhancement in those ABAP programs which are SAP HANA optimized. In a Business Add-In (BAdI), those programs first check for the database in place. In the case of SAP HANA, the optimized version is triggered; otherwise, the classical ABAP flow is executed.
    Thus there are two versions of certain processes on the application layer:
    • Standard ABAP code working on every supported database
    • Optimized ABAP code working on SAP HANA only



    86. Explain SAP HANA deployment scenarios?
    Side-by-side scenario: In the side-by-side scenario, the database tables that are used by the SAP HANA Live products need to be replicated from the corresponding SAP Business Suite back-end system into the SAP HANA database.
    This is done using SAP Landscape Transformation Replication Server. If you want to execute SAP HANA Live views, the data from the corresponding tables must be available.

    Integrated scenario: In the integrated scenario, you do not need to create and replicate the database tables, as they are already available in the SAP HANA database.
    They are maintained through the data dictionary of the corresponding ABAP Application Server. Therefore, all steps regarding table creation and data replication are not relevant in this scenario.


    87. What is CO-PA (Controlling and Profitability Analysis)?
    Profitability Analysis (CO-PA) enables you to evaluate market segments, which can be classified according to products, customers, orders or any combination of these, or strategic business units, such as sales organizations or business areas, with respect to your company's profit or contribution margin.


    The aim of the system is to provide your sales, marketing, product management and corporate planning departments with information to support internal accounting and decision-making.


    Two forms of Profitability Analysis are supported: costing-based and account-based; both can be used simultaneously.


    Costing-based Profitability Analysis is the form of profitability analysis that groups costs and revenues according to value fields and costing-based valuation approaches, both of which you can define yourself. It guarantees you access at all times to a complete, short-term profitability report.

    Account-based Profitability Analysis is a form of profitability analysis organized in accounts and using an account-based valuation approach. The distinguishing characteristic of this form is its use of cost and revenue elements. It provides you with a profitability report that is permanently reconciled with financial accounting.


    88. Explain SAP HANA Studio?
    The SAP HANA Studio is delivered as part of the SAP HANA installation package, and provides an environment for administration, modeling, development and data provisioning.

    The SAP HANA studio is a Java-based application that runs on the Eclipse platform.

    89. Explain Perspectives in the SAP HANA Studio?
    • SAP HANA Modeler: The SAP HANA Modeler perspective is used by Data Architects to create Information Models.
    • Administration Console: The Administration Console perspective is used by SAP HANA Administrators to administrate and monitor the whole SAP HANA system.
    • Resources: The Resources perspective is used to organize files, such as text files, SQL scripts, and so on, by project.
    • Other perspectives: Some perspectives in the SAP HANA Studio are designed for HANA applications development, Java development and Lifecycle Management.

    90. What's the default SAP HANA system port?
    The instance number is a two-digit number that determines the communication port of the SAP HANA system: the SQL port follows the pattern 3<instance number>15.
    For example, if you connect to an SAP HANA system with instance 00, the port used to communicate with the server will be 30015; for instance 01, it would be 30115.


    SAP HANA: Interview Questions 91 through 100

    91. What is OLTP and OLAP?
    • OLTP - Online Transaction Processing
      • The OLTP system deals with operational data, i.e. the data involved in the operation of a particular system.
    • OLAP - Online Analytical Processing
      • OLAP deals with historical or archival data, i.e. data archived over a long period of time. Data from OLTP systems is collected over a period of time and stored in a very large database called a Data Warehouse.
    92. Where was column-based storage primarily used?
    Historically, column-based storage was mainly used for analytics and data warehousing, where aggregate functions play an important role.
    93. When to use Row-Store or Column-Store?
    Row-based store: If you want to report on all the columns of a table, then the row store is more suitable, because reconstructing the complete row is one of the most expensive operations for a column-based table.
    Column-based store: If you want to store in a table huge amounts of data that should be aggregated and analyzed, then a column-based storage is more suitable.
    94. When a SAP database is migrated to SAP HANA, in which table storage type do the tables get stored?
    When a SAP system is migrated to SAP HANA, the SAP tables are automatically migrated into the storage type suited best. This logic is defined by SAP.
    The majority of tables are held in the Column Store.
    This information can be accessed in SAP HANA studio (Catalog > Open Definition) or in the technical settings of each table in the SAP dictionary (transaction SE13).
    With column-based storage, data access is only partially blocked; therefore, individual columns can be processed at the same time by different cores.
    95. How does SAP HANA compression work?
    Apart from performance reasons, the column store offers much more potential to leverage state-of-the-art data compression concepts.
    For example, SAP HANA works with bit-encoded values and compresses repeated values, which results in much lower memory requirements than for a classical row store table.
    96. What is a tuple?
    Row storage stores data in tabular form; data is inserted in the form of tuples, where each tuple is a row that uniquely identifies one record.
    97. Explain Data Compression in the Column Store?
    Data in column tables can have a two-fold compression:
    • Dictionary compression: This default method of compression is applied to all columns.
      • It involves the mapping of distinct column values to consecutive numbers, so that instead of the actual value being stored, the typically much smaller consecutive number is stored.
    • Advanced compression: Each column can be further compressed using different compression methods, namely prefix encoding, run length encoding (RLE), cluster encoding, sparse encoding, and indirect encoding.
      • The SAP HANA database uses compression algorithms to determine which type of compression is most appropriate for a column.
      • Columns with the PAGE LOADABLE attribute are compressed with the NBit algorithm only.
      • Advanced compression is applied only to the main storage of column tables. As the delta storage is optimized for write operations, it has only dictionary compression applied.
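
    A tiny worked example of dictionary compression (illustrative values): a CITY column holding [Berlin, Paris, Berlin, Rome] gets the dictionary 0 = Berlin, 1 = Paris, 2 = Rome and is stored as the integer sequence [0, 1, 0, 2], so each repeated value costs only a small integer.
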
    98. What are the different compression methods?
    • prefix encoding,
    • run length encoding (RLE),
    • cluster encoding,
    • sparse encoding, and
    • indirect encoding.
    99. What is a Compression Factor?
    The compression factor refers to the ratio of the uncompressed data size to the compressed data size in SAP HANA.
    The uncompressed data volume is a database-independent value that is defined as follows: the nominal record size multiplied by the number of records in the table. The nominal record size is the sum of the sizes of the data types of all columns.
    The compressed data volume in SAP HANA is the total size that the table occupies in the main memory of SAP HANA.
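
    A worked example (illustrative numbers): a table with a nominal record size of 100 bytes and 10 million records has an uncompressed data volume of about 1 GB; if that table occupies 100 MB in SAP HANA's main memory, the compression factor is 10.
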
    100. How SAP HANA handles Partitioning?
    • spreads table contents across the blades
    • work on smaller sets of data in parallel
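
    A minimal sketch of creating a partitioned column table (hypothetical table; hash partitioning spreads the rows across four partitions, which can be placed on different hosts):

        CREATE COLUMN TABLE "MYSCHEMA"."SALES_ITEM_PART" (
            "ITEM_ID"      INTEGER,
            "SALES_AMOUNT" DECIMAL(15,2)
        ) PARTITION BY HASH ("ITEM_ID") PARTITIONS 4;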