12. Conclusion

Databases and programming play a central role in the quality of the developed product. They reduce the time needed to prepare specific marketing and production projects, decrease losses during their realization, and eliminate errors in the preparation of technological, business, accounting and other documentation, which gives the organization an immediate economic effect. To uncover the full potential of databases, a suitable set of programming tools is needed to cope with their tasks. That is why companies today place high demands on computer programs that support and coordinate the work of a company's management and financial units, as well as the optimal use of computer equipment.

13. Appendices

The course work is accompanied by a Microsoft SQL Server query file named "dskbank.sql", which can be used to import the database into the DBMS on your local server.

In the chosen database, Bank DSK, we have applied database normalization, which means organizing the attributes and relations so as to decrease data redundancy and increase data integrity. Essentially, it is a method that simplifies the design of the database in order to reach an optimal structure. 2NF was first introduced by Edgar F. Codd, the inventor of the relational model, in a 1971 conference paper. [11] Second normal form is a property of a relation in a relational database.
A relation in 2NF must first meet the minimal requirements of normalization (1NF): repeating groups are eliminated from individual tables, a separate table is created for each set of related data, and each set of related data is identified with a primary key. In addition, no non-prime attribute may depend on any proper subset of any candidate key; every non-prime attribute must depend on the whole of every candidate key. [12]

In the current course work we can notice that there are many relations between the tables, which could become a problem if the database gains a large market impact in the future. That is why we have decided that the next version should implement third normal form (3NF), the third step of the normalization process, which requires that no non-key column depends on the primary key transitively through another non-key column. In practice, only the foreign key column should be used to relate one table to another, and no other columns from the parent table should be duplicated in the referencing table. [9]

The other improvement we have considered is a data warehouse architecture. The data stored in a warehouse is uploaded from the operational systems; it can pass through an operational data store and may require data cleaning to ensure data quality before it is used for reporting. It is typically represented as a star schema and includes staging, data integration and access layers, the last of which helps customers retrieve data. It supports decision making, analytical reporting, and the integration and consolidation of data. The information gathered in a warehouse could be used in one of the following domains: regulation of production strategies, client analysis (buying time, preferences and budget), and operations analysis (giving the opportunity for corrections to business operations and their environment).
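The normalization steps described above can be sketched in code. The following is a minimal illustration, not the actual dskbank.sql schema: the table and column names (branch, account, branch_city) are hypothetical, and SQLite is used only because it is easy to run; the same DDL applies to MariaDB/MySQL.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# A design with a composite key (account_id, branch_id) in which branch_city
# depended only on branch_id would violate 2NF (a partial dependency on a
# proper subset of the key). The decomposed, 2NF-compliant design moves that
# dependency into its own table keyed by branch_id alone:
cur.executescript("""
CREATE TABLE branch (
    branch_id   INTEGER PRIMARY KEY,
    branch_city TEXT NOT NULL
);
CREATE TABLE account (
    account_id INTEGER PRIMARY KEY,
    balance    REAL NOT NULL,
    branch_id  INTEGER NOT NULL REFERENCES branch(branch_id)
);
""")
cur.execute("INSERT INTO branch VALUES (1, 'Sofia')")
cur.execute("INSERT INTO account VALUES (100, 250.0, 1)")

# The city is stored exactly once, so updating it cannot create
# inconsistencies; the join recovers the combined view on demand.
rows = cur.execute("""
SELECT a.account_id, b.branch_city
FROM account a JOIN branch b ON a.branch_id = b.branch_id
""").fetchall()
print(rows)  # [(100, 'Sofia')]
con.close()
```

The same decomposition idea extends to 3NF: any non-key column that depends on another non-key column is moved into its own table and referenced through a foreign key.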
All of the listed factors will make using our database and service much easier for us and for our clients. [10]

For the undertaken course work we have used a relational database management system (RDBMS), that is, a database management system based on the relational model. The term was first used by Edgar F. Codd in 1970. Over the following years, IBM developed System R, a research project that produced a prototype RDBMS, but the first commercial product was released by Oracle in 1979. In his paper, Codd presents the "relational view" as enabling "a clearer evaluation of the scope and logical limitation of present formatted data systems, and also relative merits of competing representations of data within a single system." An RDBMS should present the data to the user as a collection of tables and provide relational operators to manipulate that data. [8]

5. Literature Review

The books we have used for implementing this database are "Beginning Database Design" [4], "Advanced DBMS" [5] and "Database Systems: A Practical Approach to Design, Implementation, and Management" [6]. We have also gone through the documentation for phpMyAdmin, as well as reports and articles on database design, database forms and methodologies. The first book contains an introduction to database design models, suitable for beginners, and provides the basic knowledge for setting up a small, single-user database. It does not cover concurrency, efficiency, or how to manage large projects, and it is not a book that surveys different database management systems. The second one, "Advanced DBMS", includes more detailed and specific information about planning a database, the structure of database administration, and the possibilities and usage of Microsoft Access. It also gives recommendations for increasing speed and executing complex operations, along with exercises.
The third book used for creating the work, "Database Systems: A Practical Approach to Design, Implementation, and Management", has been the most useful and the most substantial for the undertaken course work, due to its size and the way the information is presented. It covers more than thirty topics, each described in enough detail to be understood by beginners as well as proficient programmers. Although it does not include the topic of Big Data, it contains valuable material on architecture, design, forms, methodology, DDL (Data Definition Language), DML (Data Manipulation Language), DCL (Data Control Language), triggers, procedures, data mining, data warehousing, database security, administration and more. In conclusion, we could say that by reading more than one resource and viewing a particular problem from more than one perspective, you can analyze it better and form your own opinion and solution. That is why we started with articles, went through reports and books for beginners, and ended with a more sophisticated source.

6. Methodology / Approach

6.1. Conceptual Data Model
The conceptual data model describes the highest-level relationships between the entities. It does not specify the attributes or the primary keys. The figure below presents the conceptual data model for our DSK Bank database.

6.2. Logical Data Model
The logical data model describes the data in the greatest detail short of physical implementation, including all the entities and their relationships, all the attributes of each entity, the foreign keys, and normalization.

6.3. Physical Data Model
The physical data model describes how the model will be built in the database: it shows the structure, including the column names, their datatypes, constraints, primary keys, foreign keys and the relationships between the tables.
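As an illustration of how a physical data model is expressed as DDL, the sketch below uses hypothetical client and account entities (not the actual dskbank.sql tables), again run through SQLite for convenience; the mapping of entities to tables, attributes to typed columns, and relationships to foreign keys is the same in MariaDB/MySQL.

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Entity -> table, attribute -> column with datatype and constraints,
# relationship -> foreign key:
con.executescript("""
CREATE TABLE client (
    client_id  INTEGER PRIMARY KEY,       -- primary key from the logical model
    full_name  TEXT NOT NULL
);
CREATE TABLE account (
    account_id INTEGER PRIMARY KEY,
    balance    REAL NOT NULL DEFAULT 0.0, -- column with datatype and constraint
    client_id  INTEGER NOT NULL
               REFERENCES client(client_id) -- relationship as a foreign key
);
""")

# Inspect the resulting physical structure (column names and datatypes):
cols = [(row[1], row[2]) for row in con.execute("PRAGMA table_info(account)")]
print(cols)  # [('account_id', 'INTEGER'), ('balance', 'REAL'), ('client_id', 'INTEGER')]
con.close()
```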
The steps of physical data model design include the conversion of entities into tables, the conversion of relationships into foreign keys, and the conversion of attributes into columns. [7]

1. Introduction

This course work introduces the fundamentals of bank account information for a particular bank and the relations between the database tables. It focuses on the fundamentals of database design and modeling, and on the languages, models and techniques provided by the database management system. We have developed the conceptual, logical and physical database models using a phpMyAdmin SQL dump. DML (Data Manipulation Language), DDL (Data Definition Language) and DCL (Data Control Language) were used for creating and designing the tables together with their respective history tables, which mirror the operational tables with additional attributes to support revision and full point-in-time inquiry on any master data.

We have chosen DSK Bank as a case for the following reasons: it has the largest branch network in Bulgaria; it is one of the pioneers in the country in the field of corporate social responsibility; it became the first banking institution on the Balkan Peninsula to introduce artificial intelligence, through a humanoid robot, into the process of consulting clients about its banking services; it takes care of S.O.S. families; it offers its clients credit insurance for recipients' security; and it is the only one with pensions for pupils and students (DSK Rodina). [2]

2. Database

The tool we have chosen for implementing the basics of DSK Bank is phpMyAdmin, a free, open-source administration tool for the MariaDB and MySQL database management systems. It can manage a whole MySQL server as well as a single database. Over the years it has become one of the most popular tools, especially in web hosting services, because it is a web application written in PHP.
The version we have used is phpMyAdmin 4.7.4, with server version 10.2.8-MariaDB and PHP version 5.6.31. The advantages of this tool are that it lets you edit the database schema as a diagram and export the resulting code, which saves time, and its simple visual interface lets you navigate through the database easily. In summary, it is user-friendly and comfortable to work with. In addition, phpMyAdmin is commonly installed on managed hosting environments, connecting to it consumes few local resources, and because it is web-based it can be accessed from any computer. [3]

3. Tables

The database consists of thirteen tables, with their respective foreign keys, history tables and views. The tables and the history tables are connected with triggers, as follows; each history table contains an additional timestamp column (named 'history_date') in order to keep track of the alterations of the database.
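The trigger-and-history-table pattern described above can be sketched as follows. This is a minimal illustration with a hypothetical client table, not the actual dskbank.sql triggers, and it uses SQLite trigger syntax (MariaDB triggers differ slightly in delimiter handling but follow the same idea).

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE client (
    client_id INTEGER PRIMARY KEY,
    full_name TEXT NOT NULL
);
-- Mirror of the operational table plus the history_date timestamp column.
CREATE TABLE client_history (
    client_id    INTEGER,
    full_name    TEXT,
    history_date TEXT DEFAULT CURRENT_TIMESTAMP
);
-- The trigger copies the previous row into the history table on every
-- update, so earlier states can be queried at any point in time.
CREATE TRIGGER client_audit AFTER UPDATE ON client
BEGIN
    INSERT INTO client_history (client_id, full_name)
    VALUES (OLD.client_id, OLD.full_name);
END;
""")
con.execute("INSERT INTO client VALUES (1, 'Ivan Petrov')")
con.execute("UPDATE client SET full_name = 'Ivan Georgiev' WHERE client_id = 1")

history = list(con.execute("SELECT client_id, full_name FROM client_history"))
print(history)  # [(1, 'Ivan Petrov')]
con.close()
```

A point-in-time query then simply filters the history table on the history_date column.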