Within the field of automated planning, two areas of study are planning with preferences and epistemic planning. Planning with preferences involves generating plans that optimize properties of the plan instead of, or in addition to, reaching a fixed goal. Epistemic planning supports planning over the knowledge or belief states of one or more agents in order to achieve epistemic goals, in which agents hold particular states of knowledge or belief. In this paper we motivate and explore the task of planning with epistemic preferences, and propose a method by which existing automated planning techniques can be combined for this purpose. Epistemic preferences may better capture what humans actually want from plans, and offer benefits for AI safety.