In recent years, personal data has increasingly been shared between organizations and researchers. When such information is shared, individuals' sensitive data must be protected. For this purpose, a number of privacy-preserving data publishing algorithms have been designed; these algorithms modify or transform data to protect privacy. While anonymization algorithms such as k-anonymity, l-diversity, and t-closeness focus on transforming the data itself into a protected form, the differential privacy model instead perturbs the results of queries posed on the data. These algorithms can therefore be compared by their performance, or by the utility of queries applied to anonymized data or of the noise-added computed results. In this work, we present a domain-independent semantic model of data anonymization techniques that also accounts for individuals' differing privacy concerns. The proposed conceptual model thus integrates a generic view of privacy-preserving data anonymization algorithms with a personalized privacy approach.
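To make the data-transformation side of this contrast concrete, the following is a minimal, illustrative sketch of k-anonymity on a toy table; the dataset, the generalization rules (age bucketing, ZIP truncation), and the choice of k are assumptions for illustration only, not part of the proposed model. Quasi-identifiers are generalized until every combination of them occurs at least k times.

```python
from collections import Counter

# Hypothetical toy dataset of (age, zip_code, diagnosis) records;
# all values are illustrative, not drawn from any real data.
records = [
    (29, "47677", "flu"),
    (22, "47602", "flu"),
    (27, "47678", "cold"),
    (43, "47905", "cold"),
    (48, "47909", "flu"),
    (47, "47906", "flu"),
]

def generalize(record):
    """Generalize quasi-identifiers: bucket age into decades, truncate ZIP to 3 digits."""
    age, zip_code, sensitive = record
    decade = (age // 10) * 10
    return (f"{decade}-{decade + 9}", zip_code[:3] + "**", sensitive)

def is_k_anonymous(table, k):
    """True if every quasi-identifier combination (age, ZIP) occurs at least k times."""
    groups = Counter((age, zip_code) for age, zip_code, _ in table)
    return all(count >= k for count in groups.values())

anonymized = [generalize(r) for r in records]
print(is_k_anonymous([(str(a), z, s) for a, z, s in records], 2))  # raw data: False
print(is_k_anonymous(anonymized, 2))                               # generalized: True
```

Under this generalization, the six records collapse into two quasi-identifier groups of three records each, so the table satisfies 2-anonymity (and 3-anonymity), whereas every raw record is unique. Differential privacy, by contrast, would leave the data untouched and add calibrated noise to query answers instead.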