Empirical Assessment of Privacy-Preserving Techniques in Big Data Analytics
DOI: https://doi.org/10.29070/qnkj2h53

Keywords: privacy-preserving, data analytics, big data, healthcare, collaborative filtering

Abstract
This research proposes a privacy-preserving paradigm for big data mining. Two potential use cases are defined within the suggested architecture. The first is a privacy-preserving collaborative filtering method used to generate recommendations in a healthcare system where patients' data is randomly distributed across different locations. In today's data-driven world, the true benefit of big data is realized when data is processed quickly and the information it contains can serve as a basis for decisions. Data mining techniques uncover valuable insights and patterns within massive databases. Alongside these analytical benefits, however, exposing all of this data to data miners raises a number of security concerns, since bad actors could abuse it. It is therefore important to strike a balance between the availability and security of data: critical information must be protected without compromising application performance.
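As an illustration only (the paper's actual protocol is not specified here), the distributed collaborative-filtering idea can be sketched with randomized perturbation: each site adds zero-mean noise to its local patient ratings before sharing them, so an aggregator can estimate item-level statistics for recommendations without ever seeing raw ratings. All function names, the noise scale, and the toy data below are hypothetical, not taken from the paper.

```python
import random

def perturb_ratings(ratings, noise_scale=0.5, rng=None):
    """Add zero-mean uniform noise to each rating before it leaves a site.

    The aggregator never receives raw ratings; across many contributions
    the noise averages out, so item-level statistics remain usable.
    """
    rng = rng or random.Random(0)
    return {item: r + rng.uniform(-noise_scale, noise_scale)
            for item, r in ratings.items()}

def aggregate_item_means(perturbed_sites):
    """Combine perturbed ratings from distributed sites into per-item means."""
    totals, counts = {}, {}
    for site in perturbed_sites:
        for item, r in site.items():
            totals[item] = totals.get(item, 0.0) + r
            counts[item] = counts.get(item, 0) + 1
    return {item: totals[item] / counts[item] for item in totals}

# Three sites hold disjoint patient ratings for treatments "A" and "B";
# each perturbs locally, then only the noisy values are aggregated.
sites = [{"A": 4.0, "B": 2.0}, {"A": 5.0}, {"A": 4.5, "B": 3.0}]
perturbed = [perturb_ratings(s, rng=random.Random(i)) for i, s in enumerate(sites)]
means = aggregate_item_means(perturbed)
```

Because each individual rating is perturbed before leaving its site, no single site's raw data is exposed, while the aggregated means stay close to the true values as the number of contributions grows.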