The rapid emergence of big data sources strains traditional data organization because of this data's large volume, velocity, variety, value, and veracity. Performing timely analysis on huge datasets is the central promise of big data analytics. The frameworks used for such analysis compose analytics jobs into directed acyclic graphs (DAGs) of small tasks and then aggregate the tasks' intermediate results into the final result; they do so with the help of a scheduler and a reliable storage layer that distributes the datasets across different machines. This paper presents these two aspects, scheduling and storage, describes their key principles, and shows how those principles are realized in widely deployed systems. The aim of this research project is to design a novel memory-management scheme for in-memory databases. Specially designed hardware architectures can support the memory management of the host processor. This paper also explores how such hardware architectures can support and accelerate data acquisition, data filtering, and data analysis.
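To make the DAG-of-tasks model concrete, the following is a minimal sketch (not taken from any particular framework) of a hypothetical word-count job: a job is expressed as small tasks with dependencies, a toy scheduler runs them in topological order, and a final task aggregates the intermediate results from its upstream tasks.

```python
from collections import defaultdict, deque

def topological_order(deps):
    """Return task names ordered so every task runs after its dependencies.
    deps maps task name -> list of task names it depends on."""
    indegree = {t: len(d) for t, d in deps.items()}
    dependents = defaultdict(list)
    for t, d in deps.items():
        for dep in d:
            dependents[dep].append(t)
    ready = deque(t for t, n in indegree.items() if n == 0)
    order = []
    while ready:
        t = ready.popleft()
        order.append(t)
        for nxt in dependents[t]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)
    return order

# Hypothetical job: two map tasks over data partitions, one reduce task
# that aggregates their intermediate results into the final answer.
partitions = {"map0": ["big data big"], "map1": ["data analytics"]}
deps = {"map0": [], "map1": [], "reduce": ["map0", "map1"]}

intermediate = {}
final = defaultdict(int)
for task in topological_order(deps):
    if task.startswith("map"):
        counts = defaultdict(int)
        for line in partitions[task]:
            for word in line.split():
                counts[word] += 1
        intermediate[task] = counts  # would go to the storage layer
    else:
        # Aggregate the intermediate results from upstream tasks.
        for upstream in deps[task]:
            for word, n in intermediate[upstream].items():
                final[word] += n

print(dict(final))  # → {'big': 2, 'data': 2, 'analytics': 1}
```

In a real deployment the scheduler places ready tasks on cluster machines and the intermediate results live in the distributed storage layer rather than in a local dictionary; the control flow, however, follows this same dependency-driven pattern.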