We are focused on creating a framework that is easy to use in day-to-day development. Developers do not need deep big-data expertise; they can concentrate entirely on business logic. The framework includes built-in optimization of the Spark job: it shrinks the execution plan and reduces data shuffling between executors by up to 70%. Developers are not required to write any Spark code; the transformation logic is the only thing they need to supply. Once the business logic is complete, an optimized Spark job can be deployed to the cluster with minimal effort. With these optimizations, the framework can process 1 TB of data with complex business logic in under 15 minutes.
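To make the "developers write only transformation logic" contract concrete, here is a minimal sketch of what such a developer-facing interface might look like. All names here (`Pipeline`, `transform`, `run_local`) are hypothetical illustrations under assumed semantics, not the framework's actual API; in the real framework the registered steps would be compiled into an optimized Spark job rather than run locally.

```python
from dataclasses import dataclass, field
from typing import Callable, Iterable, List

# Stand-in for a Spark Row in this local sketch.
Row = dict


@dataclass
class Pipeline:
    """Collects developer-supplied transformation steps (hypothetical API)."""
    steps: List[Callable[[Row], Row]] = field(default_factory=list)

    def transform(self, fn: Callable[[Row], Row]) -> Callable[[Row], Row]:
        """Register a per-row business-logic step; usable as a decorator."""
        self.steps.append(fn)
        return fn

    def run_local(self, rows: Iterable[Row]) -> List[Row]:
        """Apply all registered steps in order. The real framework would
        instead translate these steps into an optimized Spark execution
        plan and deploy it to the cluster."""
        out = []
        for row in rows:
            for step in self.steps:
                row = step(row)
            out.append(row)
        return out


pipeline = Pipeline()


@pipeline.transform
def add_total(row: Row) -> Row:
    # Pure business logic: no Spark APIs appear anywhere in developer code.
    return {**row, "total": row["price"] * row["qty"]}


result = pipeline.run_local([{"price": 2.0, "qty": 3}])
```

The key design point this sketch illustrates is the separation of concerns: the developer's functions are plain, Spark-free callables, while job construction, plan optimization, and deployment stay inside the framework.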