DATA SECURITY IN MULTI CLOUD STORAGE

ABSTRACT
“Data Security in Multi-Cloud Storage” is a web-based application developed on the .NET framework. In recent years, cloud storage services have become a fast-growing source of profit by providing a comparatively low-cost, scalable, location-independent platform for clients’ data.
In a cloud computing environment, the integrity and security offered to client data are often insufficient, and this application is intended to close that gap. Provable Data Possession (PDP) is a technique for ensuring the integrity of data in storage outsourcing. This project addresses the construction of an efficient PDP scheme for distributed cloud storage that supports service scalability and data migration, in which multiple cloud service providers cooperatively store and maintain the client’s data. We present a cooperative PDP scheme based on homomorphic verifiable responses and a hash index hierarchy, and we prove the security of the scheme under a multi-prover zero-knowledge proof system, which satisfies the completeness, knowledge-soundness, and zero-knowledge properties. With the cooperative PDP scheme, the client can check the availability, integrity, and security of data in cloud storage, detect abnormalities in a timely manner, renew multiple copies of the data, and reduce the workload on the server.
In addition, we articulate performance-optimization mechanisms for the scheme and, in particular, present an efficient method for selecting optimal parameter values to minimize the computation costs of clients and storage service providers. The solution introduces lower computation and communication overheads than non-cooperative approaches. The application also alerts the client when a data file is illegally accessed by the administrator or any other third party.



CHAPTER 1
INTRODUCTION
1.1 SYSTEM OVERVIEW 
In recent years, cloud storage services have become a fast-growing source of profit by providing a comparatively low-cost, scalable, location-independent platform for clients’ data. In a cloud computing environment, the integrity and security offered to client data are often insufficient, and this application is intended to close that gap. With the cooperative Provable Data Possession (PDP) scheme, the client can check the availability, integrity, and security of data in cloud storage, detect abnormalities in a file in a timely manner, renew multiple copies of the data, and reduce the workload on the server. The solution introduces lower computation and communication overheads than non-cooperative approaches, and the application alerts the client when a data file is illegally accessed by the administrator or any other third party.

1.2 SCOPE OF THE PROJECT
In a cloud computing environment, the integrity and security offered to client data are often insufficient, and this application is intended to close that gap. Provable Data Possession (PDP) is a technique for ensuring the integrity of data in storage outsourcing. This project addresses the construction of an efficient PDP scheme for distributed cloud storage that supports service scalability and data migration, in which multiple cloud service providers cooperatively store and maintain the client’s data. We present a cooperative PDP scheme based on homomorphic verifiable responses and a hash index hierarchy, and we prove the security of the scheme under a multi-prover zero-knowledge proof system, which satisfies the completeness, knowledge-soundness, and zero-knowledge properties. With the cooperative scheme, the client can check the availability, integrity, and security of data in cloud storage. In addition, we articulate performance-optimization mechanisms for the scheme and, in particular, present an efficient method for selecting optimal parameter values to minimize the computation costs of clients and storage service providers. The solution introduces lower computation and communication overheads than non-cooperative approaches.
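The core idea of a PDP challenge-response protocol can be sketched as follows. This is only an illustrative Python sketch that uses a keyed hash (HMAC) per block; the actual CPDP scheme uses homomorphic verifiable responses and a hash index hierarchy so that the provider returns a short proof rather than the raw blocks. All function names here are our own.

```python
import hashlib
import hmac

def tag_blocks(key: bytes, blocks: list) -> list:
    """Client-side setup: compute a keyed tag over each indexed block."""
    return [
        hmac.new(key, i.to_bytes(4, "big") + block, hashlib.sha256).digest()
        for i, block in enumerate(blocks)
    ]

def prove(blocks: list, challenge: list) -> list:
    """Provider side: answer a challenge. (A real CPDP scheme returns a
    short homomorphic proof instead of the challenged blocks themselves.)"""
    return [blocks[i] for i in challenge]

def verify(key: bytes, challenge: list, response: list, tags: list) -> bool:
    """Client side: recompute the tag for every challenged index and
    compare it with the tag stored at setup time."""
    return all(
        hmac.compare_digest(
            hmac.new(key, i.to_bytes(4, "big") + block, hashlib.sha256).digest(),
            tags[i],
        )
        for i, block in zip(challenge, response)
    )
```

A client would run `tag_blocks` once before outsourcing, keep the tags, and later issue random challenges; any tampered block fails verification.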

1.3 PROBLEM STATEMENT
The existing system cannot regularly and promptly detect illegal access to cloud users’ data and alert the affected users, so any unauthorized access to that data creates a problem. Being vulnerable to security attacks, it could bring irretrievable loss to clients’ data: confidential data in an enterprise may be illegally accessed through a remote interface provided by a multi-cloud, and relevant data archives may be lost or tampered with when they are stored in an uncertain storage pool outside the enterprise. The existing system does not provide frequent updates, issue challenges to users, or perform block insertions anywhere, nor does it detect abnormalities in a timely manner, renew multiple copies of data, or reduce the workload on the server. These drawbacks of the existing schemes signify the pressing need for new techniques that support such advanced features.





CHAPTER 2
SYSTEM ANALYSIS
2.1. INTRODUCTION
Systems analysis is the process of collecting factual data, understanding the processes involved, identifying problems, and recommending feasible suggestions for improving the functioning of the system. It involves studying the business processes, gathering operational data, understanding the information flow, finding bottlenecks, and evolving solutions to overcome the weaknesses of the system so as to achieve the organizational goals.

2.2 EXISTING SYSTEM
The basic problems with the existing systems are that they are vulnerable to security attacks, which could bring irretrievable loss to clients’ data. Confidential data in an enterprise may be illegally accessed through a remote interface provided by a multi-cloud, and relevant data archives may be lost or tampered with when they are stored in an uncertain storage pool outside the enterprise.

2.2.1 Demerits
·        It cannot regularly detect, in a timely manner, illegal access to cloud users’ data and alert the affected users.
·        Unauthorized access to cloud users’ data creates a problem for the cloud provider.
·        It does not provide frequent updates, issue challenges to users, support block insertions anywhere, detect abnormalities in a timely manner, or renew multiple copies of data.

2.3 PROPOSED SYSTEM
The proposed system is developed on the .NET framework with C# and SQL Server 2008 R2. In the proposed cooperative PDP (CPDP) scheme, the client can check the availability, integrity, and security of his data. The system provides frequent updates, issues challenges to users, and performs block insertions anywhere. It detects abnormalities in a timely manner, renews multiple copies of data, and reduces the workload on the server.

2.3.1 Merits
·        In the cooperative PDP scheme, the client can check the availability, integrity, and security of his cloud data.
·        It provides frequent updates, issues challenges to users, and performs block insertions anywhere.
·        It detects abnormalities in a timely manner and provides an interactive interface through which the user can easily work with different areas of the application.
·        It reduces the workload on the server.







CHAPTER 3
DEVELOPMENT ENVIRONMENT
3.1 DEVELOPMENT METHOD
This integrated software system was designed and developed based on the Waterfall Model, which explicitly expresses the interaction between subsequent phases. Testing is not an activity that strictly follows the implementation phase: in each phase of the software development process, the results obtained have to be compared against what is required, and in every phase quality has to be assessed and controlled.
3.2     HARDWARE REQUIREMENTS
       PROCESSOR                       :       PENTIUM IV 2.4 GHZ AND ABOVE
       HARD DISK                       :       10 GB OR MORE
       RAM                             :       512 MB
       KEYBOARD                        :       101-KEY STANDARD KEYBOARD

3.3     SOFTWARE REQUIREMENTS
      WEB FRAMEWORK            :       ASP.NET FRAMEWORK
      CODING LANGUAGE          :       C#
      DATABASE                 :       SQL SERVER 2008 R2


3.4 SOFTWARE DESCRIPTION
3.4.1 Features of .Net
Microsoft .NET is a set of Microsoft software technologies for rapidly building and integrating XML Web services, Microsoft Windows-based applications, and Web solutions. The .NET Framework is a language-neutral platform for writing programs that can easily and securely interoperate. There is no language barrier with .NET: numerous languages are available to the developer, including Managed C++, C#, Visual Basic and JScript. The .NET Framework provides the foundation for components to interact seamlessly, whether locally or remotely on different platforms. It standardizes common data types and communication protocols so that components created in different languages can easily interoperate.
                    “.NET” is also the collective name given to various software components built upon the .NET platform. These will be both products (Visual Studio.NET and Windows.NET Server, for instance) and services (like Passport, .NET My Services, and so on).

3.4.2 The .Net Framework
The .NET Framework has two main parts:
3.4.2.1 The Common Language Runtime (CLR).
3.4.2.2 A hierarchical set of class libraries.
The CLR is described as the “execution engine” of .NET. It provides the environment within which programs run. Its most important features are:
·                    Conversion from a low-level assembler-style language, called Intermediate Language (IL), into code native to the platform being executed on.
·                    Memory management, notably including garbage collection.
·                    Checking and enforcing security restrictions on the running code.
·                    Loading and executing programs, with version control and other such features.
The following features of the .NET Framework are also worth describing.


Figure 3.1 .Net Framework
3.4.3 Managed Code
Managed code is code that targets .NET and contains certain extra information - “metadata” - to describe itself. Whilst both managed and unmanaged code can run in the runtime, only managed code contains the information that allows the CLR to guarantee, for instance, safe execution and interoperability.

3.4.4 Managed Data
With managed code comes managed data. The CLR provides memory allocation and deallocation facilities, and garbage collection. Some .NET languages use managed data by default, such as C#, Visual Basic.NET and JScript.NET, whereas others, namely C++, do not. Targeting the CLR can, depending on the language you’re using, impose certain constraints on the features available. As with managed and unmanaged code, one can have both managed and unmanaged data in .NET applications - data that doesn’t get garbage collected but instead is looked after by unmanaged code.

3.4.5 Common Type System
The CLR uses something called the Common Type System (CTS) to strictly enforce type-safety. This ensures that all classes are compatible with each other by describing types in a common way. The CTS defines how types work within the runtime, which enables types in one language to interoperate with types in another language, including cross-language exception handling. As well as ensuring that types are only used in appropriate ways, the runtime also ensures that code doesn’t attempt to access memory that hasn’t been allocated to it.

3.4.6 Common Language Specification
 The CLR provides built-in support for language interoperability. To ensure that you can develop managed code that can be fully used by developers using any programming language, a set of language features and rules for using them called the Common Language Specification (CLS) has been defined. Components that follow these rules and expose only CLS features are considered CLS-compliant.
3.4.7 The Class Library
.NET provides a single-rooted hierarchy of classes, containing over 7000 types. The root of the namespace is called System; this contains basic types like Byte, Double, Boolean, and String, as well as Object. All objects derive from System. Object. As well as objects, there are value types. Value types can be allocated on the stack, which can provide useful flexibility. There are also efficient means of converting value types to object types if and when necessary.
The set of classes is pretty comprehensive, providing collections, file, screen, and network I/O, threading, and so on, as well as XML and database connectivity. The class library is subdivided into a number of sets (or namespaces), each providing distinct areas of functionality, with dependencies between the namespaces kept to a minimum.

3.4.8 C#
C# is a simple, modern, object-oriented language derived from C++ and Java [1]. It aims to combine the high productivity of Visual Basic with the raw power of C++. It is part of Microsoft Visual Studio 7.0, which supports VB, VC++, C#, VBScript and JScript.
All of these languages provide access to the Microsoft .NET platform, which includes a common execution engine and a rich class library. Microsoft’s equivalent of the JVM is the Common Language Runtime (CLR), which accommodates more than one language, such as C#, VB.NET, JScript and managed C++.


3.5 SQL SERVER 2008 R2
Microsoft SQL Server is a relational database management system (RDBMS) produced by Microsoft. Its primary query languages are MS-SQL and T-SQL. Microsoft SQL Server 2008 is a database platform for large-scale online transaction processing (OLTP), data warehousing, and e-commerce applications; it is also a business intelligence platform for data integration, analysis, and reporting solutions.
SQL Server 2008 introduces “studios” to help with development and management tasks: SQL Server Management Studio and Business Intelligence Development Studio. In Management Studio, you develop and manage SQL Server Database Engine and notification solutions, manage deployed Analysis Services solutions, manage and run Integration Services packages, and manage report servers and Reporting Services reports and report models. In BI Development Studio, you develop business intelligence solutions using Analysis Services projects to develop cubes, dimensions, and mining structures; Reporting Services projects to create reports; the Report Model project to define models for reports; and Integration Services projects to create packages.
Earlier major improvements introduced with SQL Server 7.0 include multi-user support, multi-platform support, added memory support, scalability, integration with the Microsoft Management Console (MMC), improved multiple-server management, parallel database backup and restore, data replication, data warehousing, distributed queries, distributed transactions, dynamic locking, Internet access, integrated Windows security, mail integration, Microsoft English Query, and ODBC support. SQL Server management is accomplished through a set of component applications, including SQL Server Enterprise Manager, Profiler, Query Analyzer, Service Manager and wizards.
3.5.1 Features of SQL-Server

The OLAP Services feature available in SQL Server version 7.0 is now called SQL Server 2000 Analysis Services; the term OLAP Services has been replaced with the term Analysis Services. Analysis Services also includes a new data mining component. The Repository component available in SQL Server version 7.0 is now called Microsoft SQL Server 2000 Meta Data Services, and references to the component now use the term Meta Data Services.
The term repository is used only in reference to the repository engine within Meta Data Services. A SQL Server database consists of several types of objects, including the following:
3.5.2 Table
A database is a collection of data about a specific topic. A table is a collection of records, and a collection of tables makes up the database.
3.5.3 Query
A query is a question that is asked of the data. The database engine gathers the data that answers the question from one or more tables.
3.5.4 View of Table
A table can be viewed and worked with in two forms:
3.5.4.1. Design View
3.5.4.2. Datasheet View

3.5.4.1 Design View
          To build or modify the structure of a table, we work in the table’s Design view, where we can specify what kind of data each field will hold.
3.5.4.2 Datasheet View
            To add, edit or analyse the data itself, we work in the table’s Datasheet view.

3.6 FEASIBILITY STUDY
             The feasibility of the project is analyzed in this phase, and a business proposal is put forth with a very general plan for the project and some cost estimates. During system analysis, the feasibility study of the proposed system is carried out to ensure that the proposed system is not a burden to the company. For feasibility analysis, some understanding of the major requirements for the system is essential.
3.6.1 Economical Feasibility
                   This study is carried out to check the economic impact that the system will have on the organization. The amount of funds that the company can pour into the research and development of the system is limited, so the expenditures must be justified. The developed system was well within budget, which was achieved because most of the technologies used are freely available; only the customized products had to be purchased.

3.6.2 Technical Feasibility

               This study is carried out to check the technical feasibility, that is, the technical requirements of the system. Any system developed must not place a high demand on the available technical resources, as this would lead to high demands being placed on the client. The developed system has modest requirements, as only minimal or no changes are required for implementing it.
3.6.3 Social Feasibility
      The aspect of this study is to check the level of acceptance of the system by the user, which includes the process of training the user to use the system efficiently. The user must not feel threatened by the system, but must instead accept it as a necessity. The level of acceptance by the users depends solely on the methods that are employed to educate the user about the system and make him familiar with it. His level of confidence must be raised so that he is also able to offer constructive criticism, which is welcomed, as he is the final user of the system.













CHAPTER 4
SYSTEM DESIGN
4.1 INTRODUCTION
The term “design” is defined as “the process of applying various techniques and principles for the purpose of designing a process or a system in sufficient detail to permit its physical realization”. The system design transforms a logical representation of what a given system is required to be into a physical specification. In system design, high-level decisions are taken regarding the basic system architecture, platforms and tools to be used. Design starts with the system’s requirement specification and converts it into a physical reality during development. Important design factors such as reliability, response time, throughput of the system, and maintainability should be taken into account.

4.2 MODULE DESCRIPTION
The application has five modules, each of which plays an important role:
            4.2.1 Registration
            4.2.2 File Processing
            4.2.3 Data Maintenance and Verification
            4.2.4 Data Monitoring
            4.2.5 Security Alert

4.2.1 Registration
This module contains the registration form, where a new user can enter details such as username, password, name, gender, mobile number, email address and so on. Only registered users can access the cloud data.
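A minimal sketch of this registration logic follows. The `register` and `login` helpers are hypothetical Python stand-ins for the application's .NET code, and a salted PBKDF2 hash is assumed for password storage:

```python
import hashlib
import hmac
import os

def register(users: dict, username: str, password: str) -> bool:
    """Add a new account; reject the username if it is already a member."""
    if username in users:
        return False
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    users[username] = (salt, digest)
    return True

def login(users: dict, username: str, password: str) -> bool:
    """Check a password against the stored salted hash."""
    record = users.get(username)
    if record is None:
        return False
    salt, digest = record
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)
```

Storing only a salted hash, rather than the password itself, means a leaked user table does not directly reveal credentials.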

4.2.2 File Processing

After creating an account in the cloud, the user can upload files to and download files from the cloud using a secret key. Before each upload and download, the system checks for proper authentication for security purposes.
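The upload/download flow might be sketched as follows, with an in-memory stand-in for the cloud store. The `CloudStore` class and its key-digest check are illustrative assumptions, not the project's actual code:

```python
import hashlib

class CloudStore:
    """Toy cloud store: a file can be downloaded only with the secret
    key that was supplied when it was uploaded."""

    def __init__(self):
        self._files = {}  # filename -> (digest of secret key, content)

    def upload(self, filename: str, secret_key: str, content: bytes) -> None:
        digest = hashlib.sha256(secret_key.encode()).hexdigest()
        self._files[filename] = (digest, content)

    def download(self, filename: str, secret_key: str) -> bytes:
        digest, content = self._files[filename]
        if hashlib.sha256(secret_key.encode()).hexdigest() != digest:
            raise PermissionError("invalid secret key")
        return content
```

Only a digest of the key is kept server-side, so the store itself never holds the user's secret in the clear.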

4.2.3 Data Maintenance and Verification
The administrator maintains the data and verifies that the user and the data are valid. The administrator also checks for data redundancy to ensure the proper allocation of storage to each user. If any insufficiency is found, he takes appropriate action to rectify the problem.
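The redundancy check could, for example, compare content hashes. This hypothetical helper groups the names of byte-identical files so the administrator can see where storage is duplicated:

```python
import hashlib
from collections import defaultdict

def find_redundant(files: dict) -> list:
    """Group the names of files whose contents are byte-for-byte identical.

    `files` maps filename -> content (bytes); the result is a list of
    groups, each containing two or more filenames with the same content.
    """
    groups = defaultdict(list)
    for name, content in files.items():
        groups[hashlib.sha256(content).hexdigest()].append(name)
    return [names for names in groups.values() if len(names) > 1]
```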

4.2.4 Data Monitoring
The Third Party Auditor (TPA) checks whether the cloud is being accessed by authenticated persons, and monitors both the cloud administrator and the users. The TPA monitors the cloud data and controls illegal access to cloud users’ data.
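One simple way to express the auditor's monitoring rule (flag any access by someone other than the file's owner, such as the admin reading a user's file) is sketched below; the log format is an assumption:

```python
def flag_illegal_access(access_log: list, owners: dict) -> list:
    """Return every (actor, filename) log entry where the actor is not
    the file's owner, i.e. a candidate illegal access for the TPA.

    `access_log` is a list of (actor, filename) pairs;
    `owners` maps filename -> owning username.
    """
    return [
        (actor, filename)
        for actor, filename in access_log
        if owners.get(filename) != actor
    ]
```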


4.2.5 Security Alert
The Third Party Auditor views the cloud file details and the administrator’s activity on them, and checks and confirms the cloud actions. If the administrator illegally tries to access a cloud user’s data files, the Third Party Auditor alerts the client by sending an email.
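Composing the alert email could look roughly like this. The auditor address is hypothetical, and actually sending the message would use something like `smtplib.SMTP(host).send_message(msg)`:

```python
from email.message import EmailMessage

TPA_ADDRESS = "tpa@example.com"  # hypothetical auditor mailbox

def build_alert(owner_email: str, filename: str, actor: str) -> EmailMessage:
    """Compose (but do not send) the alert mail for an illegal access."""
    msg = EmailMessage()
    msg["From"] = TPA_ADDRESS
    msg["To"] = owner_email
    msg["Subject"] = f"Security alert: illegal access to {filename}"
    msg.set_content(
        f"User '{actor}' tried to access your file '{filename}' "
        "without permission."
    )
    return msg
```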

4.3 USE CASE DIAGRAM
A use case is a methodology used in system analysis to identify, clarify, and organize system requirements. The use case is made up of a set of possible sequences of interactions between systems and users in a particular environment and related to a particular goal.



                                                Figure 4.1 Use Case Diagram

A use case can be thought of as a collection of possible scenarios related to a particular goal, indeed, the use case and goal are sometimes considered to be synonymous.
A use case has these characteristics:
  • Organizes functional requirements
  • Models the goals of system/actor (user) interactions
  • Records paths (called scenarios) from trigger events to goals
  • Describes one main flow of events (also called a basic course of action), and possibly other ones, called exceptional flows of events (also called alternate courses of action)
  • Is multi-level, so that one use case can use the functionality of another one.
                        In this use case diagram there are three main users: Owner, Admin, and TPA. The Owner performs registration, file processing and modification; the Admin maintains and verifies the data; and the TPA monitors the system and sends security alerts to the Owner’s account.

4.4 DATA FLOW DIAGRAM
A data flow diagram (DFD) is a graphic representation of the “flow” of data through business functions or processes. More generally, a data flow diagram is used for the visualization of data processing. It illustrates the processes, data stores, external entities and data flows in a business or other system, and the relationships between these things. Physical DFDs represent physical files and transactions, while logical or conceptual DFDs can be used to represent business functions.
A two-dimensional diagram that explains how the data is processed and transferred in a system. The graphical depiction identifies each source of data and how it interacts with other data sources to reach a common output.


Figure 4.2 Level - 1 Data Flow Diagram

Individuals seeking to draft a data flow diagram must (1) identify external inputs and outputs, (2) determine how the inputs and outputs relate to each other, and (3) explain with graphics how these connections relate and what they result in. This type of diagram helps business development and design teams visualize how data is processed and identify or improve certain aspects.

Figure 4.3 Level - 2 Data Flow Diagram
The Level-2 data flow diagram shows the modules Registration, File Processing, Data Maintenance and Verification, Data Monitoring, and Security Alert. The client performs file processing; the admin maintains the cloud data files and verifies that the cloud is being accessed by valid users; and the Third Party Auditor alerts the client whenever illegal access occurs in the cloud.

4.5 SYSTEM ARCHITECTURE
The system architecture describes the high-level structure of the system: its layers and components and the relationships among them.
Design starts with the system’s requirement specification and converts it into a physical reality during development. Important design factors such as reliability, response time, throughput of the system, and maintainability should be taken into account.

Figure 4.5 System Architecture

The system architecture has three layers: the application, business and data layers. The data layer serves the user, the admin and the TPA. The business layer covers registration, uploading data, maintaining user data and security alerts. The application layer specifies the home page, user registration, verification, file processing, maintenance and, for any illegal access to data, a security alert.

4.6 DATABASE DESIGN
Database design is the process of producing a detailed data model of a database. This logical data model contains all the needed logical and physical design choices and physical storage parameters needed to generate a design in a Data Definition Language, which can then be used to create a database. A fully attributed data model contains detailed attributes for each entity.
The tables below list all the field names along with their types, sizes and constraints. The field types used are varchar, int and date; the size used for each type varies as per requirement, and most fields carry a NOT NULL constraint.

4.6.1 Registration
                                           Table 4.1 Registration Form
FIELD NAME      DATATYPE        ALLOW NULL
Id              Int             YES
Username        Varchar (50)    NO
Password        Varchar (50)    NO
Gender          Varchar (10)    NO
Age             Numeric (3)     NO
Mobile          Numeric (10)    NO
Email id        Varchar (50)    NO
Date            Date            NO
The term database design can be used to describe many different parts of the design of an overall database system. The process of doing database design generally consists of a number of steps which will be carried out by the database designer. Usually, the designer must determine the relationships between the different data elements and superimpose a logical structure upon the data on the basis of these relationships.
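As an illustration of the schema in Table 4.1, the Registration table could be created with DDL like the following. It is shown here against SQLite for portability (the project itself targets SQL Server 2008 R2), and the column names `EmailId` and `RegDate` are our own renamings of "Email id" and the reserved word "Date":

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """
    CREATE TABLE Registration (
        Id       INTEGER PRIMARY KEY,   -- surrogate key (nullable in the report)
        Username VARCHAR(50) NOT NULL,
        Password VARCHAR(50) NOT NULL,
        Gender   VARCHAR(10) NOT NULL,
        Age      NUMERIC(3)  NOT NULL,
        Mobile   NUMERIC(10) NOT NULL,
        EmailId  VARCHAR(50) NOT NULL,
        RegDate  DATE        NOT NULL
    )
    """
)
conn.execute(
    "INSERT INTO Registration "
    "(Username, Password, Gender, Age, Mobile, EmailId, RegDate) "
    "VALUES (?, ?, ?, ?, ?, ?, ?)",
    ("alice", "secret", "F", 25, 9876543210, "alice@example.com", "2024-01-01"),
)
row = conn.execute("SELECT Username, Age FROM Registration").fetchone()
```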

4.6.2 File Archive
Table 4.2 File Archive
FIELD NAME      DATATYPE        ALLOW NULLS
Fileid          Int             YES
Filepath        Varchar (500)   NO
Filename        Varchar (200)   NO
Filetype        Varchar (200)   NO
Filedate        Varchar (30)    NO
Fileowner       Varchar (50)    NO
Fileencrpkey    Varchar (150)   NO
Fileverify      Varchar (10)    NO

                        The File Archive table stores each file’s identification, path, name, type, date, owner, security key and verification status. It records the file properties so that all cloud users can use them for verification purposes.

4.6.3 File index
Table 4.3 File Index
FIELD NAME      DATATYPE        ALLOW NULL
Fileid          Int             YES
Filename        Varchar (200)   NO
Id              Int             YES
Owner name      Varchar (50)    NO
Cloud id        Varchar (50)    NO
Upload date     Date            NO

                        The File Index table specifies the file identification, filename, owner id, cloud identification, and upload date. The client uses this table to find out in which cloud his uploaded file is placed, and he can also access the properties of the uploaded file.









CHAPTER 5
SYSTEM TESTING
5.1 TESTING
The purpose of testing is to discover errors. Testing is the process of trying to discover every conceivable fault or weakness in a work product. It provides a way to check the functionality of components, sub-assemblies, assemblies and/or a finished product. It is the process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and does not fail in an unacceptable manner. There are various types of test, each of which addresses a specific testing requirement.

5.2. TYPES OF TESTING  
5.2.1 Unit Testing
Unit testing involves the design of test cases that validate that the internal program logic is functioning properly and that program inputs produce valid outputs. All decision branches and internal code flow should be validated. It is the testing of individual software units of the application, done after the completion of an individual unit and before integration. This is structural testing, which relies on knowledge of the unit’s construction and is invasive. Unit tests perform basic tests at the component level and test a specific business process, application, and/or system configuration.

Table 5.1 Unit Testing
TEST CASE | GIVEN INPUT | EXPECTED OUTPUT | ACTUAL OUTPUT | STATUS
Registration | New user or already a member | New user registered successfully | Existing member not re-registered | Pass
Username/password authentication | Already registered username | Only registered users log in successfully | New (unregistered) username is rejected | Pass
File upload | Secret key | File uploaded successfully | File not uploaded without a valid key | Pass
File download | Secret key given by the user at upload | File downloaded successfully | Download succeeds only with a valid key | Pass
Checking | Only registered members may check files | Check completes successfully | Valid user verified | Pass

Unit tests ensure that each unique path of a business process performs accurately to the documented specifications and contains clearly defined inputs and expected results.
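The registration cases from Table 5.1 could be automated roughly like this, with a minimal Python stand-in for the unit under test (the real tests would target the application's C# registration code):

```python
def register(users: set, username: str) -> bool:
    """Minimal stand-in for the registration unit under test:
    new usernames are added, existing members are rejected."""
    if username in users:
        return False
    users.add(username)
    return True

def test_new_user_is_registered():
    assert register(set(), "alice") is True

def test_existing_member_is_rejected():
    assert register({"alice"}, "alice") is False

# Run the two unit tests.
test_new_user_is_registered()
test_existing_member_is_rejected()
```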

5.2.2 Validation Testing
Validation testing provides final assurance that the software meets all functional, behavioral and performance requirements. Black-box testing techniques are used.
There are three main components:
·               Validation test criteria (e.g. a number in place of a number, and a character in place of a character)
·               Configuration review (to ensure the completeness of the software configuration)
·               Alpha and beta testing: alpha testing is done at the developer’s site, while beta testing is done once the software is deployed. Since this application has not been deployed, beta testing could not be performed.

Table 5.2 Validation Testing
Test Case | Input Given | Expected Output | Actual Output | Status (Pass/Fail)
1 | Correct user id and password | Login to account | Login to account | Pass
2 | Invalid user id and correct password | Login to account | Access denied | Fail
3 | Password mismatch at signup | Retype correct password | Retype correct password | Pass
4 | 10-digit phone number at signup | Accept any number | Accept only 10-digit numbers | Fail
5 | Any key to download a file | File downloaded | File not downloaded | Fail
6 | Valid key to download a file | File downloaded | File downloaded | Pass

Test cases - A number of test cases were used for testing the product. For each case, different inputs were supplied to check whether the desired output is produced or not.

5.2.3 Black Box Testing

Black-box testing is testing the software without any knowledge of the inner workings, structure or language of the module being tested. Black-box tests, like most other kinds of tests, must be written from a definitive source document, such as a specification or requirements document. It is testing in which the software under test is treated as a black box: you cannot “see” into it. The test provides inputs and responds to outputs without considering how the software works.
Table 5.3 Black Box Testing
Test Case ID | Test Description | Expected Output | Status
1 | Proper format of data inputs | Alert the user | Pass
2 | File upload inputs | Alert the user | Pass
3 | Proper download input | Alert the user | Pass
4 | File verification status | Update the status | Pass
5 | Already verified files | Warn the user | Pass
6 | Display the modified file | File displayed | Pass
7 | Email alert | Message sent | Pass

Black-box testing here concentrates mainly on the proper format of data input, the verification status, and sending an email; the expected outputs are alerting, updating, and warning the user, respectively.




CHAPTER 6
SYSTEM IMPLEMENTATION AND MAINTENANCE
6.1 SYSTEM IMPLEMENTATION
Implementation is the process of having system personnel check out and put new equipment to use, training users on the new system, and constructing any files that are needed for it. The final and most important phase in the system life cycle is the implementation of the new system. File conversion is the most time-consuming and expensive activity in the implementation stage.
System implementation refers to the steps necessary to install a new system and put it into operation. Implementation has different meanings, ranging from the conversion of a basic application to the complete changeover from an old system to a new one. The new system may be totally new, replacing an existing manual or automated system, or it may be a major modification to an existing system. The method of implementation and the time scale to be adopted are determined initially. The system is tested properly, and at the same time the users are trained in the new procedures. Proper implementation is essential to provide a reliable system that meets organizational requirements. Successful implementation may not guarantee improvement in the organization using the new system, but improper installation will prevent it. Implementation involves:
·        Careful planning
·        Investigation of the system and constraints
·        Design the methods to achieve the change over
6.2 MAINTENANCE

Software maintenance in software engineering is the modification of a software product after delivery to correct faults or to improve performance or other attributes. A common perception of maintenance is that it merely involves fixing defects. However, one study indicated that the majority (over 80%) of the maintenance effort is used for non-corrective actions (Pigoski 1997). This perception is perpetuated by users submitting problem reports that in reality are requests for functionality enhancements to the system.
Software maintenance and the evolution of systems were first addressed by Meir M. Lehman in 1969. Over a period of twenty years, his research led to the formulation of Lehman's Laws (Lehman 1997). Key findings of his research include that maintenance is really evolutionary development and that maintenance decisions are aided by understanding what happens to systems (and software) over time. Lehman demonstrated that systems continue to evolve over time; as they evolve, they grow more complex unless some action, such as code refactoring, is taken to reduce the complexity. The key software maintenance issues are both managerial and technical. Maintenance activities are commonly categorized into the following classes:
  •  Adaptive – dealing with changes in, and adapting to, the software environment
  •  Perfective – accommodating new or changed user requirements
  •  Corrective and Preventive – fixing discovered defects and preventing future ones





CHAPTER 7
CONCLUSION AND FUTURE ENHANCEMENT
7.1 CONCLUSION
Cloud storage service has become a fast-growing profit point by providing a comparably low-cost, scalable, position-independent platform for clients’ data. In a cloud computing environment, the integrity and security guarantees given to client data are not sufficient on their own, and this application is used to provide integrity for user data. In our cooperative Provable Data Possession scheme, the client can check the availability, integrity, and security of data in cloud storage, detect abnormalities in a timely manner, renew multiple copies of data, and reduce the workload on the server. This application shows that our solution introduces lower computation and communication overheads in comparison with non-cooperative approaches. The application also alerts the client when his data file is illegally accessed by the admin or any other third person.

7.2 FUTURE ENHANCEMENT

In the proposed system the TPA alerts the cloud user by email. As a future enhancement, the alert could also be sent to the cloud user by short message service or multimedia message service, so that the user is alerted even when offline. In addition, the uploading and downloading speed should not drop when a cloud server goes down, and full-fledged security should be provided for cloud users’ data and files.



APPENDIX 1
SCREEN SHOTS

Figure A1.1 Main Page
The application main page contains the login page for the Third Party Auditor, the Administrator, and the owner. First the owner should buy a cloud from the cloud provider and register; then he can access the cloud. A signup page lets the owner create an account in the cloud.



      
Figure A1.2 Uploading the File in Multi Cloud
The cloud user fills in the owner registration form to create an account in the cloud; after creating the account, the user can upload and download his data in the multi cloud. In the file uploading process the owner should fill in the file id, subject, uploading file path, and tag value; if any of these fields is empty, the cloud cannot upload the data.
  




Figure A1.3 Maintaining the Files
After the owner uploads data successfully, the administrator maintains the cloud user’s files and checks whether the files are being accessed by authenticated users. The owner file details list the file properties: file id, file name, file subject, file type, file owner, date, and verify status.






Figure A1.4 File Details
The administrator sees the file details of the owners, maintains the memory status of the cloud, and can troubleshoot if any problem occurs. The administrator also verifies the cloud owner’s file by the direct verification method.







Figure A1.5 File Verification
The administrator verifies the data uploaded by the cloud user and checks for any suspicious or malicious content in it. After clicking direct verification, the content of the file is displayed; after checking and validating everything, he allows the file to be stored in the cloud. If the file is considered suspicious, a warning is issued and the file is deleted from the cloud.


                                                                       

Figure A1.6 TPA Home Page
After the admin verifies a file, the data is updated on both the owner and TPA home pages. If the admin or a third person illegally tries to access the cloud data, an alert goes to the Third Party Auditor. The status of the modified file is updated on the home page of the TPA.
                                               





Figure A1.7 TPA Owner Details
From the home page the TPA can see that a file has been illegally accessed by someone. On selecting the affected file from the list, the owner details are displayed. The TPA views the modified file details and obtains the owner details in order to send an alert.
                  





Figure A1.8 TPA Security Alert
The TPA prepares an email alert to the owner of the cloud data. The modified file label lists the modified files; on selecting a file, the owner details are filled in automatically. The message text area specifies the file name, the file owner, and the message to be sent; on clicking the submit button, the message goes to the owner’s email account.



Figure A1.9 The Owner Email Account After Receiving the Alert
After the TPA sends the email alert, the owner of the file sees the alert and checks whether any modification was done to his file. Normally, in our application, nobody can illegally access another’s files; if someone tries, the cloud stops them and alerts the owner of the file, thereby providing data security in multi cloud storage.







APPENDIX 2
SAMPLE SOURCE CODE
ADMIN FILE VERIFICATION

public class OfficeFileReader
{
public void GetText(String path, ref string text)
// path is the path of the .doc, .xls or .ppt  file
// text is the variable in which all the extracted text will be stored
{
String result = "";
int count = 0;
try
{
IFilter ifilt = (IFilter)(new CFilter());
//System.Runtime.InteropServices.UCOMIPersistFile ipf = (System.Runtime.InteropServices.UCOMIPersistFile)(ifilt);
System.Runtime.InteropServices.ComTypes.IPersistFile ipf = (System.Runtime.InteropServices.ComTypes.IPersistFile)(ifilt);
ipf.Load(@path, 0);
uint i = 0;
STAT_CHUNK ps = new STAT_CHUNK();
ifilt.Init(IFILTER_INIT.NONE, 0, null, ref i);
int hr = 0;
while (hr == 0)
{
ifilt.GetChunk(out ps);
if (ps.flags == CHUNKSTATE.CHUNK_TEXT)
{
uint pcwcBuffer = 1000;
int hr2 = 0;
while (hr2 == Constants.FILTER_S_LAST_TEXT || hr2 == 0)
{
try
{
pcwcBuffer = 1000;
System.Text.StringBuilder sbBuffer = new StringBuilder((int)pcwcBuffer);
hr2 = ifilt.GetText(ref pcwcBuffer, sbBuffer);
// Console.WriteLine(pcwcBuffer.ToString());
if (hr2 >= 0) result += sbBuffer.ToString(0, (int)pcwcBuffer);
//textBox1.Text +=”\n”;
// result += “####”;
count++;
}
catch (System.Runtime.InteropServices.COMException myE)
{
Console.WriteLine(myE.Data + "\n" + myE.Message + "\n");
}}
}}
}
catch (System.Runtime.InteropServices.COMException myE)
{
Console.WriteLine(myE.Data + "\n" + myE.Message + "\n");
}
text = result;
return;
}}
}
public partial class tpadirectvermod : System.Web.UI.Page
{
SqlConnection con = new SqlConnection(ConfigurationManager.AppSettings["ConnectionString"]);
string fverify = "YES", fileid, fileext, path, filename;
string fmodify = "NO";
int k = 1;
protected void Page_Load(object sender, EventArgs e)
{
fileid = (string)Session["FileID"];
SqlDataAdapter adp = new SqlDataAdapter("Select * from filearchive where fid='" + fileid + "'", con);
DataSet ds = new DataSet();
adp.Fill(ds);
fileext = ds.Tables[0].Rows[0]["fext"].ToString();
path = ds.Tables[0].Rows[0]["filepath"].ToString();
filename = ds.Tables[0].Rows[0]["ffilename"].ToString();
if (fileext == ".txt" || fileext == ".TXT")
{
TextReader tr = new StreamReader(path);
TextBox2.Text = tr.ReadToEnd();
Image1.Visible = false;
}
else
{
if (fileext == ".doc" || fileext == ".DOC")
{
//FileStream fstream = new FileStream(path, FileMode.Open, FileAccess.Read);
//StreamReader sreader = new StreamReader(fstream);
//TextBox2.Text = sreader.ReadToEnd();
//Image1.Visible = false;
OfficeFileReader.OfficeFileReader objOFR = new OfficeFileReader.OfficeFileReader();
string output = "";
objOFR.GetText(path, ref output);
Console.WriteLine(output);
TextBox2.Text = output;
Image1.Visible = false;
}
else
{
if (fileext == ".docx" || fileext == ".DOCX")
{
FileStream fstream = new FileStream(path, FileMode.Open, FileAccess.Read);
StreamReader sreader = new StreamReader(fstream);
TextBox2.Text = sreader.ReadToEnd();
Image1.Visible = false;
}
else
{
if (fileext == ".pdf" || fileext == ".PDF")
{
FileStream fstream = new FileStream(path, FileMode.Open, FileAccess.Read);
StreamReader sreader = new StreamReader(fstream);
string line;
for (int i = 0; i < 100; i++)
{
// Read each line once; the original code called ReadLine() twice per
// iteration, which silently skipped every alternate line of the file.
line = sreader.ReadLine();
if (line == null)
{
break;
}
TextBox2.Text = TextBox2.Text + line;
}
Image1.Visible = false;
sreader.Close();
fstream.Close();
}else
{if (fileext == ".jpg" || fileext == ".JPG")
{
Image1.ImageUrl = "Uploads/" + filename;
Image1.Visible = true;
TextBox2.Visible = false;
}else
{if (fileext == ".jpeg" || fileext == ".JPEG")
{
Image1.ImageUrl = "Uploads/" + filename;
Image1.Visible = true;
TextBox2.Visible = false;
}
else
{
if (fileext == ".gif" || fileext == ".GIF")
{
Image1.ImageUrl = "Uploads/" + filename;
Image1.Visible = true;
TextBox2.Visible = false;
}
else
{
Image1.Visible = true;
TextBox2.Visible = true;
TextBox2.Text = "File format not supported. You can verify it directly or download it.";
}}
}}}}
}}
protected void ImageButton2_Click(object sender, ImageClickEventArgs e)
{
con.Open();
SqlCommand cmd1 = new SqlCommand("update filearchive set fverify='" + fverify + "' where fid='" + (string)Session["FileID"] + "'", con);
cmd1.ExecuteNonQuery();
SqlCommand cmd2 = new SqlCommand("update filearchive set fmodify='" + fmodify + "' where fid='" + (string)Session["FileID"] + "'", con);
cmd2.ExecuteNonQuery();
con.Close();
}
protected void Button1_Click(object sender, EventArgs e)
{
//con.Open();
//SqlCommand cmd1 = new SqlCommand("update filearchive set fverify='" + fverify + "' where fid='" + (string)Session["FileID"] + "'", con);
//cmd1.ExecuteNonQuery();
//SqlCommand cmd2 = new SqlCommand("update filearchive set fmodify='" + fmodify + "' where fid='" + (string)Session["FileID"] + "'", con);
//cmd2.ExecuteNonQuery();
//con.Close();
}
protected void Button3_Click(object sender, EventArgs e)
{
con.Open();
SqlCommand cmd1 = new SqlCommand("update filearchive set fverify='" + fverify + "' where fid='" + (string)Session["FileID"] + "'", con);
cmd1.ExecuteNonQuery();
SqlCommand cmd2 = new SqlCommand("update filearchive set fmodify='" + fmodify + "' where fid='" + (string)Session["FileID"] + "'", con);
cmd2.ExecuteNonQuery();
if (CheckBox1.Checked == true)
{
SqlCommand cmd3 = new SqlCommand("update filearchive set fmodify='YES' where fid='" + (string)Session["FileID"] + "'", con);
cmd3.ExecuteNonQuery();
}
con.Close();
//ModalPopupExtender1.Show();
}
protected void Button2_Click(object sender, EventArgs e)
{
}
protected void CheckBox1_CheckedChanged(object sender, EventArgs e)
{
}}

The admin verifies two things: a valid user and valid data. The application uses a filter function so that the cloud can be accessed only by authenticated users; if any other user tries to access it, they are warned and denied access. After successful verification, the data is automatically uploaded to the cloud.
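One caveat about the listing above: its UPDATE statements are built by string concatenation (for example `"... where fid='" + fileid + "'"`), which is vulnerable to SQL injection if a file id ever contains a quote character. A safer pattern is a parameterized query. The sketch below illustrates the idea with Python's built-in sqlite3 and a stand-in filearchive table (the table layout is an assumption inferred from the columns used in the listing); in the ASP.NET code the equivalent would be SqlParameter placeholders on SqlCommand.

```python
import sqlite3

# Stand-in for the report's filearchive table (columns inferred from the listing).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE filearchive (fid TEXT, fverify TEXT, fmodify TEXT)")
con.execute("INSERT INTO filearchive VALUES ('F001', 'NO', 'NO')")

def mark_verified(con, file_id):
    # The ? placeholder lets the driver escape the value; the query text
    # itself never changes, so a malicious id such as "x' OR '1'='1" cannot
    # rewrite the statement.
    con.execute("UPDATE filearchive SET fverify='YES', fmodify='NO' WHERE fid=?",
                (file_id,))

mark_verified(con, "F001")
row = con.execute("SELECT fverify, fmodify FROM filearchive WHERE fid='F001'").fetchone()
print(row)  # ('YES', 'NO')
```

With concatenation, the same malicious id would have updated every row; with the placeholder it simply matches no file.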

TPA MONITORING

public partial class AdminMain : System.Web.UI.Page
{
SqlConnection con = new SqlConnection(ConfigurationManager.AppSettings["ConnectionString"]);
string fverifyYES = "YES", fverifyNO = "NO", fmodify = "YES", fmodify1 = "NO";
protected void Page_Load(object sender, EventArgs e)
{SqlDataAdapter adp = new SqlDataAdapter("select * from filearchive where fmodify='" + fmodify + "'", con);
DataSet ds = new DataSet();
adp.Fill(ds);
Label20.Text = Convert.ToString("(" + ds.Tables[0].Rows.Count + ")");
if (ds.Tables[0].Rows.Count == 0){
Label2.Text = "<b>" + Label20.Text + "</b>" + " Number of TPA Modification files.";
lbmodify.Visible = false;
}else{
Label2.Text = "<b>" + Label20.Text + "</b>" + " Number of TPA Modification responses files are there, if you need to open";
}
//Verify Files
SqlDataAdapter adp1 = new SqlDataAdapter("select * from filearchive where fverify='" + fverifyYES + "' and fmodify='" + fmodify1 + "'", con);
DataSet ds1 = new DataSet();
adp1.Fill(ds1);
Label4.Text = Convert.ToString("(" + ds1.Tables[0].Rows.Count + ")");
if (ds1.Tables[0].Rows.Count == 0)
{
Label5.Text = "<b>" + Label4.Text + "</b>" + " Number of verification files.";
lbverify.Visible = false;
}else
{Label5.Text = "<b>" + Label4.Text + "</b>" + " Number of verification files are there, if you need to open";
}//Pending Files
SqlDataAdapter adp2 = new SqlDataAdapter("select * from filearchive where fverify='" + fverifyNO + "'", con);
DataSet ds2 = new DataSet();
adp2.Fill(ds2);
Label7.Text = Convert.ToString("(" + ds2.Tables[0].Rows.Count + ")");
if (ds2.Tables[0].Rows.Count == 0)
{
Label8.Text = "<b>" + Label7.Text + "</b>" + " Number of verification files.";
lbpending.Visible = false;
}else
{Label8.Text = "<b>" + Label7.Text + "</b>" + " Number of verification files are there, if you need to open";}
//All Files
SqlDataAdapter adp3 = new SqlDataAdapter("select * from filearchive", con);
DataSet ds3 = new DataSet();
adp3.Fill(ds3);
Label10.Text = Convert.ToString("(" + ds3.Tables[0].Rows.Count + ")");
if (ds3.Tables[0].Rows.Count == 0)
{
Label11.Text = "<b>" + Label10.Text + "</b>" + " Number of Owner Files.";
lbpending.Visible = false;
}
else
{
Label11.Text = "<b>" + Label10.Text + "</b>" + " Number of Owner Files are there, if you need to open";
}
}

After the admin verifies the file, the database is updated. If the admin or anyone else modifies or deletes the data, an alert goes to the Third Party Auditor; the TPA thus frequently monitors the cloud and provides security for the cloud data.
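The three dashboard counters in the listing (TPA-modified, verified, and pending files) are each obtained by filling a DataSet from a `SELECT *` and counting its rows. Counting on the database side avoids pulling whole rows across. A minimal sketch of the same three counters with sqlite3 (the filearchive layout is again an assumption based on the columns the listing uses):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE filearchive (fid TEXT, fverify TEXT, fmodify TEXT)")
con.executemany("INSERT INTO filearchive VALUES (?, ?, ?)",
                [("F1", "YES", "NO"),    # verified, untouched
                 ("F2", "NO",  "NO"),    # pending verification
                 ("F3", "YES", "YES")])  # modified after verification

def count(con, where, params=()):
    # COUNT(*) returns only the number of matching rows, not the rows.
    # The WHERE clause here is a trusted literal; values go through params.
    return con.execute("SELECT COUNT(*) FROM filearchive WHERE " + where,
                       params).fetchone()[0]

modified = count(con, "fmodify=?", ("YES",))
verified = count(con, "fverify=? AND fmodify=?", ("YES", "NO"))
pending  = count(con, "fverify=?", ("NO",))
print(modified, verified, pending)  # 1 1 1
```

The same conditions drive the AdminMain page: the modified count feeds the TPA-alert label, and the verified/pending counts feed the other two.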


TPA SECURITY ALERT

public partial class AdminQuts : System.Web.UI.Page
{
SqlConnection con = new SqlConnection(ConfigurationManager.AppSettings["ConnectionString"]);
string emailid;
string gMailAccount = "deivasigamani18@gmail.com";
string password = "deivas700715";
string to;
string subject = "Warning From ADMIN";
protected void Page_Load(object sender, EventArgs e)
{
if (!IsPostBack)
{
SqlDataAdapter adp = new SqlDataAdapter("Select ffilename from filearchive where fmodify='YES'", con);
DataSet ds = new DataSet();
adp.Fill(ds);
for (int i = 0; i < ds.Tables[0].Rows.Count; i++)
{
DropDownList1.Items.Add(ds.Tables[0].Rows[i]["ffilename"].ToString());
}}}
protected void DropDownList1_SelectedIndexChanged(object sender, EventArgs e)
{
DropDownList2.Items.Clear();
DropDownList2.Items.Insert(0, "--Select--");

if (DropDownList1.SelectedItem.Text == "--Select--")
{
DropDownList2.SelectedItem.Text = "--Select--";
TextBox2.Text = "";
string myStringVariable1 = string.Empty;
myStringVariable1 = "Select Any One File";
ClientScript.RegisterStartupScript(this.GetType(), "myalert", "alert('" + myStringVariable1 + "');", true);
}
else
{
if (DropDownList1.SelectedItem.Text == "None")
{
TextBox2.Text = "File Name : " + DropDownList1.SelectedItem.Text + " ! " + " File Owner : " + DropDownList2.SelectedItem.Text + " ! ";
DropDownList2.SelectedItem.Text = "--Select--";
SqlDataAdapter adp = new SqlDataAdapter("Select distinct fowner from filearchive", con);
DataSet ds = new DataSet();
adp.Fill(ds);
for (int i = 0; i < ds.Tables[0].Rows.Count; i++)
{
DropDownList2.Items.Add(ds.Tables[0].Rows[i]["fowner"].ToString());
}
}
else
{
SqlDataAdapter adp1 = new SqlDataAdapter("Select fowner from filearchive where ffilename='" + DropDownList1.SelectedItem.Text + "'", con);
DataSet ds1 = new DataSet();
adp1.Fill(ds1);
DropDownList2.SelectedItem.Text = ds1.Tables[0].Rows[0]["fowner"].ToString();
Button1.Enabled = true;
TextBox2.Text = "File Name : " + DropDownList1.SelectedItem.Text + " ! " + " File Owner : " + DropDownList2.SelectedItem.Text + " ! ";
}}}
protected void DropDownList2_SelectedIndexChanged(object sender, EventArgs e)
{
if (DropDownList2.SelectedItem.Text == "--Select--")
{
Button1.Enabled = false;
}
else
{
//TextBox2.Text = "<hr><br>" + " File Name : " + DropDownList1.SelectedItem.Text + " ! <br><br>" + " File Owner : " + DropDownList2.SelectedItem.Text + " ! <br><br>";
TextBox2.Text = "File Name : " + DropDownList1.SelectedItem.Text + " ! " + " File Owner : " + DropDownList2.SelectedItem.Text + " ! ";
Button1.Enabled = true;
}
}
protected void Button1_Click(object sender, EventArgs e)
{
if (DropDownList2.SelectedItem.Text == "--Select--")
{
string myStringVariable1 = string.Empty;
myStringVariable1 = "Select Owner";
ClientScript.RegisterStartupScript(this.GetType(), "myalert", "alert('" + myStringVariable1 + "');", true);
}
else
{
SqlDataAdapter adp1 = new SqlDataAdapter("Select emailid from Registration where ownerid='" + DropDownList2.SelectedItem.Text + "'", con);
DataSet ds1 = new DataSet();
adp1.Fill(ds1);
emailid = ds1.Tables[0].Rows[0]["emailid"].ToString();
to = emailid;
NetworkCredential loginInfo = new NetworkCredential(gMailAccount, password);
MailMessage msg = new MailMessage();
msg.From = new MailAddress(gMailAccount);
msg.To.Add(new MailAddress(to));
msg.Subject = subject;
msg.Body = TextBox2.Text;
msg.IsBodyHtml = true;
try
{
SmtpClient client = new SmtpClient("smtp.gmail.com");
client.EnableSsl = true;
client.UseDefaultCredentials = false;
client.Credentials = loginInfo;
client.Send(msg);
string myStringVariable1 = string.Empty;
myStringVariable1 = "Message Sent To Owner";
ClientScript.RegisterStartupScript(this.GetType(), "myalert", "alert('" + myStringVariable1 + "');", true);
}
catch (Exception ex)
{
string myStringVariable1 = string.Empty;
myStringVariable1 = "OFFLINE : Message Not Sent To Owner";
ClientScript.RegisterStartupScript(this.GetType(), "myalert", "alert('" + myStringVariable1 + "');", true);
}}}
protected void Button2_Click(object sender, EventArgs e)
{
TextBox2.Text = "";
}
}

After the admin verifies a file, the data is updated on both the owner and TPA home pages. If the admin or a third person illegally tries to access the cloud data, an alert goes to the Third Party Auditor, and the status of the modified file is updated on the TPA home page. The TPA then prepares an email alert to the owner of the cloud data. The modified file label lists the modified files; on selecting a file, the owner details are filled in automatically. The message text area specifies the file name, the file owner, and the message to be sent; on clicking the submit button, the message goes to the owner’s email account.
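The alert in the listing is composed as a MailMessage and sent through Gmail's SMTP server with SmtpClient over SSL. The same flow in outline, using Python's standard email and smtplib modules (the addresses and credentials here are placeholders, and the actual network send is commented out so the sketch stays self-contained):

```python
import smtplib
from email.message import EmailMessage

def build_alert(sender, owner_email, file_name, file_owner):
    # Compose the warning the TPA sends when a file is found to be modified,
    # mirroring the subject and body format used in the listing.
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = owner_email
    msg["Subject"] = "Warning From ADMIN"
    msg.set_content(f"File Name : {file_name} !  File Owner : {file_owner} !")
    return msg

msg = build_alert("tpa@example.com", "owner@example.com", "report.doc", "owner1")

# To actually send, open a TLS session to the provider's SMTP host, e.g.:
# with smtplib.SMTP("smtp.gmail.com", 587) as s:
#     s.starttls()
#     s.login("tpa@example.com", "app-password")  # placeholder credentials
#     s.send_message(msg)
print(msg["Subject"])  # Warning From ADMIN
```

Separating message construction from delivery also makes the alert testable without a mail server, which the listing's try/catch around client.Send cannot do.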

REFERENCES
   LIST OF BOOKS
         [1] Matthew MacDonald (2002), Beginning ASP.NET 4: in C# and VB,
               Second Edition, Tata McGraw-Hill Publishing Company Limited.

         [2] Joydip Kanjilal (2009), ASP.NET 4.0 Programming, First Edition,
               Tata McGraw-Hill Publishing Company Limited, New Delhi.

         [3] Y. Zhu, H. Hu, G.-J. Ahn, Y. Han, and S. Chen, “Collaborative
               Integrity Verification in Hybrid Clouds,” in IEEE Conference.

         [4] Jesse Liberty (2008), Programming ASP.NET 3.5, Fourth Edition,
               Tata McGraw-Hill Publishing Company Limited, New Delhi.

         [5] B. Sotomayor, R. S. Montero, I. M. Llorente, and I. T. Foster,
               “Virtual Infrastructure Management in Private and Hybrid
               Clouds,” IEEE Internet Computing.

         [6] Q. Wang, C. Wang, J. Li, K. Ren, and W. Lou, “Enabling Public
               Verifiability and Data Dynamics for Storage Security in Cloud
               Computing.”

         [7] K. D. Bowers, A. Juels, and A. Oprea, “HAIL: A High-Availability
               and Integrity Layer for Cloud Storage.”


  LIST OF WEBSITES
        [1] https://drive.google.com
        [2] http://www.amazon.com/clouddrive
        [3] https://www.dropbox.com
        [4] https://skydrive.live.com

        [5] http://www.justcloud.com
