Stage 1: Mastering Java Web Data Visualization
You need to master Java server-side technology, front-end visualization technology, and database technology. This stage is mainly about building the prerequisite skills for big data. With these skills you can already work as a data visualization engineer, but that does not yet count as a real entry into big data.
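As a minimal sketch of how these three skills fit together (the `sales` table, endpoint path, and connection details are all invented for illustration), a servlet can query a database over JDBC and return JSON for a front-end chart library such as ECharts to render:

```java
import java.io.IOException;
import java.io.PrintWriter;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical endpoint: queries a 'sales' table and returns JSON
// that a front-end chart library can render as a line or bar chart.
@WebServlet("/api/sales")
public class SalesChartServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        resp.setContentType("application/json;charset=UTF-8");
        StringBuilder json = new StringBuilder("[");
        // Connection URL and credentials are placeholders for this sketch.
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:mysql://localhost:3306/demo", "user", "password");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                 "SELECT month, total FROM sales ORDER BY month")) {
            boolean first = true;
            while (rs.next()) {
                if (!first) json.append(",");
                json.append("{\"month\":\"").append(rs.getString("month"))
                    .append("\",\"total\":").append(rs.getDouble("total"))
                    .append("}");
                first = false;
            }
        } catch (SQLException e) {
            resp.sendError(HttpServletResponse.SC_INTERNAL_SERVER_ERROR,
                           "query failed");
            return;
        }
        json.append("]");
        try (PrintWriter out = resp.getWriter()) {
            out.print(json);
        }
    }
}
```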
Stage 2: Learning the Hadoop Core and Ecosystem Technology Stack
This stage covers a wider range of technologies: you have to master HDFS distributed storage, MapReduce, ZooKeeper, Kafka, and so on. With these you can take on some big data positions, such as ETL engineer, but your knowledge base is still not complete.
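To make the MapReduce model concrete, here is the canonical word-count job from the Hadoop documentation: mappers read lines from HDFS and emit (word, 1) pairs, and a reducer sums the counts for each word:

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // Mapper: split each input line into tokens and emit (word, 1).
    public static class TokenizerMapper
            extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Reducer: sum all the 1s emitted for the same word.
    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        public void reduce(Text key, Iterable<IntWritable> values,
                Context context) throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        // Input and output are HDFS paths supplied on the command line.
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

Packaged into a jar (names here are placeholders), it runs on the cluster with `hadoop jar wordcount.jar WordCount /input /output`.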
Stage 3: Learning Computation Engines and Analysis Algorithms
For computation engines, I suggest becoming proficient with both Spark and Flink. Some companies are still using Spark, but Flink will surely become the mainstream in the future. Once you have learned this, you have a relatively complete set of big data skills and can take on high-paying positions such as big data R&D engineer, recommendation system engineer, user profiling engineer, and so on.
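For comparison with the MapReduce version above, here is a minimal sketch of the same word count as a Spark batch job in Java (input and output paths come from the command line; the same logic maps naturally onto Flink's DataStream API). It illustrates the RDD API, not a production job:

```java
import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import scala.Tuple2;

public class SparkWordCount {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("word-count");
        try (JavaSparkContext sc = new JavaSparkContext(conf)) {
            // Read text lines, split into words, count each word.
            JavaRDD<String> lines = sc.textFile(args[0]);
            JavaPairRDD<String, Integer> counts = lines
                .flatMap(line -> Arrays.asList(line.split("\\s+")).iterator())
                .mapToPair(word -> new Tuple2<>(word, 1))
                .reduceByKey(Integer::sum);
            counts.saveAsTextFile(args[1]);
        }
    }
}
```

Where MapReduce forces every job into one map and one reduce phase with intermediate results written to disk, Spark and Flink chain transformations in memory, which is why they dominate iterative workloads such as recommendation and user profiling.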