/r/dictionaryofthings
A project to define the basic concepts that we use to understand and interact with the world around us, and how they relate to one another.
Hi there! Welcome to the Dictionary of Things!
This is a project created by u/Mynotoar to attempt to define the core concepts that shape our lives.
A core concept is any idea, quality, system, concept or class of things that broadly affects our lives.
For example, "Harry Potter" is not a core concept, but books and art are. "Corkscrew" is not a core concept, but tools are. "Darude - Sandstorm" isn't a core concept, but music is.
The dictionary has three goals:
Discuss, debate and disagree. I don't claim that my definitions are objective or correct - they are based on my subjective perspective, and they are designed to be challenged.
If you think I've got something absolutely wrong or haven't considered something important in a definition, I'd love to hear about it in the comments.
This project is a personal exercise in learning, understanding and reflection. I want to hear from as many perspectives as possible when building this dictionary.
If you have any ideas for the sub, please feel free to comment in the threads or send me a message.
Standard Reddiquette applies. Be nice.
Here is the index for all definitions.
/r/Dictionary - A community for fellow word-enthusiasts and logophiles, to talk about dictionaries, word lists and language resources.
/r/Word_of_The_Hour - A sub for sharing interesting definitions - one every hour - with translations into other languages. Also features an app!
An attitude towards something where we don’t appreciate it or consider it to be important, interesting or relevant, even if there is inherent value in it, simply because we are too familiar with it. For example, many people who have access to clean drinking water will take it for granted, meaning that they either don’t consider it important, or they don’t consider it at all. Those who do not have regular access to clean drinking water, however, would consider it of great value and importance, and may even wonder why those who do have access to clean water aren’t more appreciative of the fact.

It is easier to take something for granted when we come from a privileged position and have not experienced hardship. Those who have suffered or been through hard times are much more likely to appreciate what they have, and less likely to take small things for granted. Taking time to appreciate something that we might not normally appreciate can be a valuable way to practise gratitude and increase our levels of happiness.
If something can be described as subjective, that means that the way it is perceived, understood, conceptualised or generally thought about changes based on who is doing the perceiving. For example, beauty is subjective - some people will consider a sunset to be beautiful, and some people will not. Even though the phenomenon is exactly the same - the Earth rotating relative to the Sun, causing the latter to appear to dip beneath the horizon from where we are standing - one person’s interpretation of, and emotional reaction to, that same phenomenon may be completely different to another person’s.
We usually contrast subjectivity with objectivity, which refers to something being the same no matter who observes it. A sunset itself, for example, is an objective phenomenon - it happens whether or not any people or animals are around to witness it, and it doesn’t change based on who witnesses it. But the perception of that sunset - the way we see it, and how we interpret it - is subjective. Different people have different levels of vision, and different animals have completely different systems of vision. Some animals, for example, can see in colours which humans cannot even perceive, and so these animals will have a different subjective experience of an earthly sunset than all humans will.
It is sometimes worth remembering how many aspects of our human experience are subjective and not objective. Not only is our perception subjective, but so too are most of our abstract concepts, such as morality and ethics. We typically think of certain actions being objectively “right” or “wrong”, but they are only so in our particular culture and community. In most Western countries for example, it is considered acceptable to wear clothes exposing one’s arms, legs and midriff - in many other countries it is considered immoral.
It can be helpful to understand subjectivity when talking about tricky concepts such as morality - if we accept that all moral rules are only subjectively true (not objectively), then we are forced to come up with a justification for each rule. “One shouldn’t punch other people” is a moral rule we can justify, for example, by arguing that punching people causes physical pain, and as pain is undesirable, we should act to minimise other people’s pain. If a subjective moral rule does not have an obvious justification, then it may need reevaluating.
Achieving a desired outcome through an unfair or improper means. If you win a game of cards because you managed to look at the other player’s hand, you have cheated, because the “proper method” to win a game of cards is to use skill, knowledge and guesswork to make the correct plays, despite not having information about what cards your opponent holds. Cheating is considered an unfair and dishonest thing to do, because it violates the principle of equality that is used in games, tests, and any form of competition.
For example, if a group of students are taking an examination, they all have equal access to the information provided to them by their teachers and school. If one student finds an answer key to the test and memorises it before taking the test, he then has an advantage which the other students don’t have. The method is a valid means of achieving the desired result of completing the test, but because it violates the principle of equality, it is considered an unfair advantage and therefore cheating.
Cheating usually involves taking an easier route to achieve a desired outcome than the conventional method. This is because we frequently like to take shortcuts - if we can save the effort needed to achieve a task by using an easier method that achieves the same result, or near enough the same, then we will often choose to do so. So the test taker who wishes to achieve a good grade but doesn’t want to expend the effort of studying and improving his skills might consider cheating as a viable shortcut to achieve the grade.
This is then a calculated risk: if cheating is detected in this case, it will likely result in the student being disqualified and receiving a fail grade, the opposite of the outcome the student was trying to achieve. Thus cheating usually only occurs when the cheater thinks they will not be detected, or when they are confident that there will be no repercussions for their actions.
Cheating is also a term used to refer to adultery, or the act of engaging in a sexual or romantic relationship with another person while one is in a committed (monoamorous) relationship. This can still be considered obtaining a desired result by unfair means - in this case, the means are unfair because they do not respect or value the adulterer’s partner, and often not the person they are cheating with either.
The act of engaging in an emotional, romantic or sexual relationship with someone at the same time as being in a form of relationship with someone else. Adultery almost always takes place without the informed consent of one’s original partner.
It is possible for one person to be in harmonious relationships with more than one person at the same time, such as when dating before committing to one partner in a monoamorous context, or when practising polyamory, meaning that each partner in a relationship is permitted to have other partners at the same time. In a monoamorous relationship, however, having an extra partner is not permitted by the terms of the relationship: it is considered a betrayal of trust, and therefore an extremely harmful act.
Adultery is one reason why many relationships end - when the adulterer’s partner finds out about it, it may cause irreparable damage to the relationship as the partner is no longer able to trust the adulterer.
The state of being an adult, which we can loosely define as someone who has reached full maturity, growth and development in terms of their physical, sexual, emotional and psychological characteristics. The age at which one attains adulthood is arbitrarily defined according to a particular culture - in many Western cultures, for example, the age of adulthood is 18.
Of course, no physical change takes place overnight on the day before one’s 18th birthday - the marker of adulthood rather serves to mark the point where a society believes a person should be fully matured, developed and ready to act independently from their parents.
This means that reaching adulthood carries many expectations about how an individual should behave. Actions that are forgivable when one is a child are no longer considered acceptable when one is an adult - for example, if a child kicks another child, they may be told off by their parents but not face any other consequences. An adult who kicks another person however may be found guilty of a crime, and could be punished by the law.
This is because adulthood - in many if not most cultures - generally entails a responsibility to act and treat others in certain ways, a responsibility which is taught to children gradually as they transition into adulthood. If we say that someone is “not behaving like an adult”, then, we mean that they are acting as a child might, by behaving irresponsibly or as if their actions will not have consequences. The individual expectations of an adult will vary widely between cultures.
In legal terms, a binding agreement that stipulates one or more parties must do something or avoid doing something else, which can be enforced by law. It is essentially a written form of a promise between two parties. If one party “breaches the contract” by failing to do what they have promised to do (or doing something they have promised not to do), then theoretically they can be punished through legal action, such as a lawsuit, or being taken to court to rule on the infraction. For example, if you paid a builder to repair a broken floor in your house, and she came and removed the carpet but then left and never completed the job, you might be able to sue her for breach of contract, as she did not do what was promised by completing the work she was paid for.
A contract only has power because we say it does. If a promise between two parties is not able to be enforced by a third party (such as a court of law), then promises can only ever rely on trust between the two parties. It is not always possible to trust the other party when the exchange of money is involved, as either or both sides might be willing to cheat the terms of their promise in order to maximise their own personal gain. While the builder might be willing to abandon a project they were paid to do and keep the money, a customer might equally be unwilling to pay the builder until after the work is done, and come up with excuses as to why he cannot pay her. Thus contracts exist to protect the interests of both parties.
We can also apply the concept of contracts to morality. One can argue that we only act in moral and ethical ways towards one another because we want others to treat us in the same way - known as the “principle of reciprocity”. If all people behaved immorally, then we would expect people to behave immorally towards us; so to avoid this, we aim to behave morally towards others to encourage reciprocal moral behaviour. This may at times contradict one’s personal wishes - someone who wants to make a lot of money might not scruple to cheat, steal or exploit others to get it. But, in addition to the deterrence of laws and moral rules which prohibit these behaviours, the potential cheat knows that if everyone cheated, nobody would benefit at all. For this reason, we could argue that morality is a “social contract” which represents an agreement between all people to behave morally, in order to maximise the benefit for everyone.
The concept of giving and receiving in equal measure: of ensuring that a kind action or deed is met with one in return. The notion of reciprocity is incredibly important for developing effective relationships and friendships. A reciprocal relationship (whether romantic, sexual, friendly or otherwise) is one in which both partners put in an equal effort - conflict often arises when this reciprocity doesn’t exist, if one partner is putting in less effort than the other.
Reciprocity is important in relationships because we rarely treat our friends and partners with kindness without the expectation that they will treat us in the same way. If such a relationship exists where one person does not reciprocate any kindness, thoughtfulness or consideration that their partner gives them, we might consider that person as exploiting their partner, as ideal relationships are almost always founded on a principle of equality between partners.
The "principle of reciprocity" is a key concept in the study of morality and ethics, meaning that we act in virtuous ways towards other people, and expect others to act virtuously towards us in return. Thus reciprocity is one reason why most people do not commit crimes - if we stole, hurt, killed or abused other people with regularity, then we would expect other people to do the same things to us.
Ethics is a term for a system of beliefs about morals, which are ways that one is expected to live. Morality refers to the act of behaving consistently with a particular ethical system, which we call being “moral”. Morality is extremely complex, as it is arguably not a natural behaviour, but instead comes as a result of humans agreeing to a principle of reciprocity: treat each other as one would like to be treated. This results in a contract founded on trust, wherein we all trust that other humans will not act in a way that endangers our own safety and security. When a significant number of people violate this trust, then we no longer feel safe and secure, as we cannot guarantee that other people will respect the established rules of morality. Thus, morality as a system must necessarily involve near-unanimous agreement about rules for living.
Morality is typically defined by “rules”, which either allow, encourage, oblige or prohibit a particular action or set of actions. These rules are expressed in the general case - rather than saying “John mustn’t kick Sarah”, for example, we might say “You mustn’t kick other people.” Thus it is clear that the act we call morally “wrong” is not that John kicked Sarah, it’s that he - as a human being - kicked anyone at all. It would be no more acceptable if he kicked Natalie or Adam instead. We often describe actions as “right” or “wrong”, meaning they either conform to a moral rule, or they do not. Thus we say that it was “wrong” for John to kick Sarah.
It is worth remembering, however, that these rules aren’t written down anywhere in nature - they are artificial rules that humans have created, in order to promote a peaceful co-existence with other people. Thus, we frequently disagree with other people about what moral rules are, and whether acts break or conform to those rules. A customer might feel cheated when a repairman charges twice what he originally asked for a particular service, declaring the moral rule that “One shouldn’t break a promise,” and asserting that the repairman has broken this rule. The repairman can argue that he hasn’t broken the customer’s rule, as the repair job turned out to be more expensive than initially thought. Or the repairman could counter with a different rule, that “One should always act in ways to maximise personal benefit,” arguing that doubling his price is consistent with this rule.
Because disagreements such as this one can potentially be irresolvable when two parties have different expectations, beliefs and needs, we have created laws to refer to, which are designed to codify a particular moral rule and determine when it has been flouted. The law provides an external third party who has nothing to gain or lose from the conflict between the customer and the repairman, and thus is not likely to be influenced to favour either party in the disagreement. Thus, if the repairman has written his initial price down in a legally binding contract, and later attempted to charge the customer double this written price, the customer can argue that he has broken the law by going outside of the terms of the contract.
Laws are still not perfect embodiments of ethical or moral systems - as laws are written using language, which can inevitably be interpreted in more than one way, it may not always be clear how to apply a law to a given situation. However, some form of third party is usually needed to resolve conflicts when two people disagree about a moral rule, and the law is perhaps the strongest such third party, where a law exists within a society to address that conflict.
While there are many different ethical systems, perhaps one of the most commonly accepted rules is “Do no harm to others”. This is vague enough that it is very easy to argue about whether the rule has been broken (What constitutes harm? What about exceptional circumstances? And so on), but it is a guiding principle that informs the majority of ethical systems used by humans.
Any act where a human causes harm to another human or animal. This form of harm can be verbal, emotional, sexual, psychological, physical, or any other form of treatment where an individual actively hurts another individual using some means. The term often refers to a sustained pattern of abuse from one individual to another - for example, physical or sexual assault over a period of time - although a single act against an individual can constitute abuse.
Some types of abuse are widely considered to be a crime - especially violence and sexual abuse of any kind - although the levels at which this crime is enforced, punished and prevented vary widely between different societies with different laws. Other types of abuse, such as psychological, verbal or emotional abuse, may not be a punishable act in some places, even though they can often lead to lasting harm in the same way as other forms of abuse.
Someone who has experienced abuse is typically called a victim, although they may or may not wish to use this label. The experience of abuse can frequently lead to lasting or irreversible damage to one’s psychological and/or physical well-being, and often requires counselling or psychotherapy to allow the person affected to come to terms with what happened and improve their well-being. Abuse can also severely damage one’s self-esteem if one is incorrectly made to feel at fault for the abuse, and may even damage one’s reputation in societies which do not have sufficient protections in place for those who have suffered abuse. For this reason, many victims of abuse might choose not to report the abuse to a friend or authority, or may not even feel safe to do so.
It is hard to fully capture the reasons why humans commit acts of abuse towards other humans, especially without indulging the (perhaps tempting) tendency to demonise abusers and label them as “evil” with no further analysis. Some people commit acts of abuse because they truly do not consider their actions to be morally wrong, and do not consider the potential damage they will cause their victim. Others do consider their actions to be harmful, but enjoy causing harm and do so for this reason - a tendency we sometimes label “sadism”. Others still - who may fall into either of the previous categories - may have been abused themselves, and so have learned and normalised abusive behaviours, which they repeat. This is known as the “cycle of abuse”, and often occurs when a victim has been abused from a young age, before they understand what constitutes acceptable treatment from other people. Finally, some may commit acts of abuse out of desperate circumstances, a poor socio-economic situation, or poor emotional health, prompting them to lash out at those nearest to them.
Abuse is often found in families or marriages, as these are the most typical situations where the victim is not easily able to get away from their abuser, especially if the victim is a child and has no resources or guardian protecting their welfare, or a marriage where one spouse is financially or emotionally dependent on an abusive spouse.
Abuse cannot be justified according to most ethical frameworks, which define an active choice to harm another person or living creature as morally wrong. There are almost no situations in which the act of abuse is not an active choice, other than true “insanity”, where the abuser is not able to understand what they are doing, or rare cases such as those who hurt others while sleepwalking. However, understanding the reasons behind abuse may help to prevent further acts of abuse.
While the most egregious forms of abuse are criminalised, it is often too late to reverse the damage caused by an abuse after it has been identified, and the abuser punished. It is for this reason that many argue that preventative measures need to be taken to prevent abuse, by improving education and living standards, and ensuring that children grow up understanding and internalising ethical and kind ways to treat other people. “Do no harm” may seem an obvious ethical principle to follow for most, but it seems that there is more work to be done in teaching this principle to those who do grow up to eventually harm others.
An adjective to describe something that doesn’t exist as a concrete or physical object in the world, but rather is an idea, concept, category, schema, field of study or something else. For example, the word “happiness” is abstract, as it does not refer to any one particular thing, but rather a collection of chemical and biological processes that make up the emotion of happiness.
If it is possible for a proposition to be proved false, then it is falsifiable. This does not mean that the proposition actually is false. It only means that there is some way in which you could theoretically prove the proposition wrong. For example, the statement “Adam has black hair” is easily falsifiable: to prove it false, you need to show using evidence that Adam’s hair is another colour besides black - for example, blond. It may be that Adam’s hair truly is black, but all falsifiability requires is that there is a test which could disprove a given hypothesis. Similarly, a statement such as “All swans are white” is falsifiable, as it would only take one black swan to prove the statement false.
Many propositions are not falsifiable. For example, it is possible that there exists an invisible dog, who is in the same room as you right now. You cannot see, hear, touch or use any other sense to detect the dog - no scientific apparatus can detect his presence, and your human intuition will be of no use either. But I can assert nonetheless that “There is an invisible, intangible, undetectable dog in the same room as you now,” and there is no way to prove this statement false. If the dog cannot be detected, then he cannot be proved to exist - but equally, if he were invisible and intangible, it would naturally follow that there was no proof of his existence. All objections or attempts to disprove the dog's existence are consistent with the hypothesis that the dog exists. Therefore you cannot prove that there is no invisible dog.
More sinister examples arise from fiction, such as the philosophical concept of a “brain in a jar”, which proposes the idea that all of our experiences on this world aren’t real, and that we are all simply floating brains kept in a jar, stimulated by electrical activity in such a way that causes our brain to believe it resides within a thinking, feeling human body, as opposed to a jar. This proposition is also unfalsifiable - any attempt you might make to disprove it will surely fail. You can use your senses - sight, touch, hearing and so on - to verify that you have a body. But under the “brain in a jar” hypothesis, all of these senses can be stimulated to fake the feeling of true sensation, using an intelligent machine feeding our brain electrical signals. Thus the idea that we are merely brains in jars is unfalsifiable.
The unfalsifiable ideas above are not useful for us in any way - the presence of an invisible or intangible dog stands to benefit nobody, and subscribing to the theory that we are merely brains in jars and reality is therefore fake might actively harm our own or others’ well-being, as we might act in selfish or hurtful ways if we believe that other people are not real. For such beliefs, although there is no means of proving them false, it may often be pragmatic to proceed on the assumption that they are false, as it is not useful or beneficial to assume their truth.
However, not all unfalsifiable beliefs are useless. Many axioms and assumptions that underpin various fields of study are unfalsifiable. Any claims about the past are unfalsifiable, as we cannot conclusively prove that they did not happen, and yet history is still an informative field. Mathematics, logic and other fields rely on basic axioms which are used to build larger systems - those axioms cannot be falsified either. They are simply assumed to be valid even if they are not testable.
Furthermore, the assumption that unfalsifiable beliefs are false comes into conflict with many common unfalsifiable beliefs, such as those of religion. The proposition that “God exists” arguably cannot be falsified, because the typical definition of God precludes the use of any human scientific tests to verify or falsify his existence. The Christian God, for example, is frequently said to exist outside of our universe. As this is somewhere that we cannot access by definition (we have no knowledge of anything outside our universe, or even of whether anything can exist outside our universe), we cannot prove that God doesn’t exist. For many Christians, this is taken to mean that we should accept God’s existence on the grounds of faith, and not on evidence alone. This conclusion is unlikely to satisfy anyone who believes only in what can be known through physical evidence in the world (empiricism) or the use of reason (rationalism).
Falsifiability is a very useful guiding principle used in science to create testable hypotheses. For a hypothesis to be testable, there should be some way of proving it false as well as proving it true (both verifiability and falsifiability are important). For example, the hypothesis “If I stick a needle in this balloon, it will burst” is a meaningful one, because it could be proved false (if you stuck a needle in the balloon and it didn’t burst), and it can be proved true.
A term for any collection of more than one object which are considered to be linked, united or joined together according to an abstract principle. Groupings are arbitrary concepts - we can talk about a group of people, flora or fauna existing, but that group only exists because we have named it. We often talk about groups in the context of people who are linked by a common property or function. A musical band, for example, is a type of group often consisting of three to four people, linked by the fact that they all play music together at public venues in order to entertain others and make money. The group itself doesn’t depend on all of its members being in the same physical location, nor does the group cease to exist when the band members are doing activities other than playing music. The composition of the group doesn’t reflect any other natural properties of the people involved - it is a human label given to collect those people into a conceptual collection, based on the fact that they all perform music for a career.
Belonging to a group is dependent on everyone within the group recognising your particular membership - I cannot call myself a Frenchman if the nation of France does not recognise me as one of its citizens, for example. Joining a group is often a bilateral process - to become a part of a political party, nation or club, you must usually be accepted as a member by those who are already a part of the group. Leaving a group can often be a unilateral process - quitting a political party or religious group does not usually require the consent of other members. Thus, when we talk about “joining” or “leaving” a group of people, no physical or natural action is taking place - the action of joining or leaving a particular group is purely a linguistic act, which causes our psychological concept of the group’s membership to change.
In terms of people, there are many types of groups. Families are groups linked by blood relations; cultural groups are vaguer terms for people who all identify with a common culture; religions are groups linking together people with common beliefs; political parties are groups linking those with similar political beliefs, and so on.
Beyond groups of people, we have created groupings to describe all manner of objects in the natural world, such as animals and plants. The system of taxonomy is a field of study which gives names to all living creatures and places them in an interrelated hierarchy.
We can group together abstract concepts as well as concrete ones. Languages, beliefs, political systems, philosophy, fields of study or any idea at all can be grouped if one wishes - we describe the “Romance languages” for example, or “Neoliberal views”, or “modern schools of economics”, all of which are groupings of ideas under a conceptual heading.
We can also consider nations as types of groups, as a designation for a collection of people living within human-designed borders, under a set of common laws, and usually with certain languages in common across its people. Interestingly, nations themselves are also subject to grouping - the modern ideas of the United Nations, European Union, African Union, ASEAN and so on are all institutions which have grouped together a set of nations for political, trade, diplomatic purposes and so on. These groupings are institutionalised in that they have laws and regulations in place to specify the relationships between member nations, and the conduct of the group as a whole.
Other terms for groupings of nations include BRIC (Brazil, Russia, India and China) or the notably fluid term “developed country”. These are not institutional groupings, but rather designations for collections of countries which appear to satisfy a set of criteria such as economic growth or development. Often the criteria for group membership are not fixed, meaning that which country is considered a “developed nation”, for example, may well depend on whom you ask.
Another type of group is the company or corporation, which is a collection of people who all come together to fulfil a function for a job, such as manufacturing a product or providing a service. Cadbury’s is a company which manufactures chocolate - its existence is predicated on many things, such as having factories which make the chocolate, stores to which it ships and sells its product, but principally, it consists of people who are paid for the job of manufacturing, packaging and distributing the chocolate. If all of the people who worked for Cadbury’s ceased to exist, so would the company. This grouping is not conditional on the company’s premises, factories, distribution lines or forms of marketing, but it exists solely because of the people who are paid to fulfil a particular function within that company, and the relationships between those people.
What this shows is that a group - any group - can be institutionalised to any extent, in order for the “group” to execute functions as opposed to an individual member. For example, the “band” will go on tour and perform at a concert, not the four people who comprise it. The “European Union” will make a decision, not Latvia, Germany, France or Estonia. This is a polite and well-respected fiction which allows us to mentally conceptualise groups as if they were individuals - we can ascribe characteristics to nations such as “aggressive”, “peaceful” or “friendly” that we would primarily use to describe other people. It is worth remembering that these labels are all acts of metaphor - that nations are an abstract and invented term to describe very large groups of people who all act and think differently and possess vastly different characters. But the group label allows us to join these disparate people under one heading and understand them all as “French”, “Albanian”, “Chilean” and so on. This can often lead to conflict, and to attitudes such as xenophobia, where we form a negative view of people from a particular group based on the perceived characteristics of the whole group, without considering the differing individuals within that group.
At the same time, a group does not have to be institutionalised to exist. I can create the “ABC” group to describe the three nations of Albania, Bahrain and Canada, and while it is not institutionalised in that nobody else understands or respects it as a group, and no conventions exist which link these three nations, it is still a group in that I have named it as such. Even if a group is institutionalised to some extent, it cannot ever be given concrete existence - a group will only exist because we say it does.
This shows that a group is by its nature an arbitrary invention - it has power only because other people believe in and respect its existence, and form conventions about how the group functions in relation to other people and groups, as well as conventions about how members of the group relate to one another. Bands, nations and companies do not exist in the concrete world; they are simply labels to describe a set of people achieving a common function.
A term for a large range of processes which involve transforming ingredients to make food which is edible and, ideally, delicious. Cooking typically involves using fire or another heat source to heat ingredients which have been mixed and prepared in a particular way, although some forms of cooking, such as the preparation of salads, don’t strictly require heat. Humans have very flexible diets, and can eat a large range of food types, although experts typically recommend having a varied diet and moderating the amounts of certain food types (for example, limiting sugary and salty foods).
One’s food intake depends entirely on personal preference, habit, body image, goals and other factors. As humans cannot choose whether to eat – we must eat in order to survive – cooking and the preparation of food is an aspect of the life of every human on Earth, although the range of cooking methods and ingredients used will vary widely depending on culture, availability of different types of food, income and other factors. Cooking is considered a hobby for many, and a career for just as many – someone with the relevant skills and knowledge can be paid in almost any location in the world to cook and prepare food in restaurants, hotels, bars, private parties, for individuals, and in many other scenarios.
The notion of having the absolute unhindered ability to make human choices without constraints. The notion of free will is often held to be a vitally important principle in Western and individualistic societies, which should be defended against all those who want to restrict our free will. However, the structure of society is not actually designed to accommodate absolute free will, and in reality, choices are always limited by external factors.
Humans are primarily limited by the laws of physics, which dictate things they absolutely cannot do, such as running faster than the speed of light or walking on air. Humans are also limited by laws put in place by individual societies, which prohibit particular actions and put into place punishment systems such as imprisonment, fines and community service in order to discourage humans from committing those acts. Murder, theft and arson are all prohibited under such laws, and punished accordingly. These limits extend to free speech, an ideal which also cannot be truly freely exercised – there are laws, for example, which prohibit someone from shouting “Fire!” in a crowded theatre, or “Bomb!” on an airplane, as both of these acts of speech would cause negative consequences.
Humans are also bound by the rules of culture, which, while they do not carry the same punitive weight as legal punishments, are often equally deterring for people who are bound by them. For example, there is no legal rule prohibiting men from wearing clothing typically worn by women, but in many patriarchal societies there is often a strong cultural prohibition against cross-dressing, which introduces a constraint on those who wish to exercise the otherwise free choice to do so.
Furthermore, we cannot simply do anything we wish to, if we do not have the ability to do so in the first place. It is not possible for someone with no ability to play the piano or read music to sit down at a piano and flawlessly perform a Bach concerto. Anyone who wishes to perform the piece is free to attempt it, but the required conditions that one must satisfy first are learning the skills needed to play the piano, and then learning the particular piece of music.
Thus, the idea of “perfect” free will, where one can do absolutely anything one wishes, does not and cannot exist. Rather, we can say that an individual is free to do anything they wish that is not physically impossible, so long as they have met all of the necessary conditions required to do it. If their action violates the law, breaks the rules of their culture, or in some other way creates a negative effect, then they are not free from the consequences which will follow. Anyone is free to fire a gun and murder another human being, but they are not free from the consequences of being tried, imprisoned and possibly executed for committing a crime. Anyone is free to shout “Fire!” in a crowded theatre, but is not free from the consequences of the panic and disorder that may result.
In philosophy, free will is often contrasted with the idea of determinism, which argues that no actions whatsoever are free, as all actions that we undertake are pre-determined by a complex causal chain of events leading up to that action. As determinism is arguably neither verifiable nor falsifiable (see the definition of determinism), however, it may be safe to adopt the assumption that we do have free will.
There are also many pragmatic reasons for doing so. For example, if we assume that people do not have free will, then it follows that they do not have any responsibility for their actions, as they could not have chosen to act any differently. This means that we could not reasonably punish criminals for their actions - if they could not have chosen differently, then it was not their fault that they committed the crime. We could not have a functioning system of justice under determinism, which would leave most societies with no choice but to let criminals walk free.
Thus, we must assume that individuals have responsibility for their actions, which entails believing that they have the free will to act differently in every circumstance. Exceptions are sometimes made for those who act involuntarily, such as those who have committed a crime in their sleep without intention, or those who have suffered a mental illness which impaired their ability to control their actions.
Broadly, the use of one linguistic expression to indicate a different meaning from what it conventionally refers to. Metaphors and similes are commonly thought of as devices found in literature, such as “His look was icy” or “The ocean was raging”. These are both examples of metaphors, as an ocean is inanimate and incapable of feeling emotions, but the use of the verb “raging” allows the reader of the expression to imagine an ocean moving with intense force, as would a person who is “raging”. Similarly, a look cannot be hot or cold in the literal sense, but describing a look as “icy” communicates to most readers the idea of expressing displeasure on someone’s face.
However, literature is not the sole domain of metaphors - the ordinary English language is littered with metaphor, and it is not always clear when a metaphor has been deployed. See the previous sentence, for example, where at least four metaphors have been used. Indeed, avoiding metaphors is often the more difficult task.
Consider, for example, the ways in which we talk about time in English. One can “spend”, “save”, “waste”, “take”, “borrow”, or “give” time, and so on, all of which suggests that time is a form of currency which can be accrued, given, taken, used or misused. Literally, it is not - time proceeds linearly regardless of what we do within it - but the metaphor of “time as currency” is so thoroughly embedded in our language that it is very difficult to talk about time in a way that avoids metaphor. Such expressions can be called “dead metaphors”, as we no longer tend to consider these expressions metaphorical.
We typically contrast “metaphor” with “literality” when discussing an expression, the latter of which means using the ordinary meaning of the expression. For example, in the expression “The ocean was raging", “raging” is a metaphor used to communicate violent and forceful movement. In the expression “The man was raging”, “raging” is used to describe a literal action of expressing the emotion of extreme anger.
Metaphors are not the random use of any expression to stand for any given meaning - one would most likely not be understood if they said “His look was biscuity” or “The ocean was shopping”, for example. A comprehensible metaphor must have some connection in meaning between the expression chosen as a metaphor and the meaning it is chosen to represent. For example, the expression “The ocean was raging” communicates its meaning successfully because we understand the concept of a person expressing extreme anger and moving about violently, and so we can interpret this metaphor to mean that the ocean was similarly moving about in a violent and forceful manner.
Simile is a similar device to metaphor, in which one thing is compared to another explicitly. The difference between the two is that similes state one thing to be similar to another, whereas a metaphor states that one thing is another. A simile might be “His eyes were like diamonds”, whereas an equivalent metaphor might be “His eyes were diamonds.” A reader knows that both similes and metaphors are not logical propositions and are not supposed to be interpreted as literally true - the person’s eyes which were described are not in actuality hard stones made of carbon.
Thus, when similes and metaphors are used in literature, they often contain nearly the same meaning. Sometimes a metaphor might be considered stronger than a simile - one politician wishing to slander another might sound more forceful saying “He’s a demon” than “He’s like a demon” - but the overall effect of both is to compare one thing to something else by calling on a set of shared characteristics both possess.
A cognitive process wherein one previously held an idea, opinion or belief, but external inputs cause them to alter that belief, or abandon it entirely.
People often change their mind after engaging in arguments with people who hold a different belief to their own, or after learning about a different perspective which introduces ideas that they hadn’t taken into consideration.
Often, we tie together our viewpoints and beliefs with our personal sense of self-esteem or pride, and it can be a very difficult thing to accept an alternative viewpoint to our own when this is the case, as doing so means admitting the possibility that we were initially mistaken or misinformed. Some might consider being “wrong” as injurious to their pride or damaging to their reputation, as they might believe that others will judge them for their mistake. These people may therefore wish to avoid being seen as wrong at all times. It is for this reason that changing one’s mind can often be a very difficult and brave thing to do, as it requires putting our pride aside. Some argue that a key quality of open-mindedness means being able to do just this: to change our minds when the facts lead us to a different conclusion than we originally held, without fear of being judged for being “mistaken”.
An openness to changing one’s mind is seen in some Western cultures as a virtuous and desirable quality, and in contrast, an unwillingness to change one’s mind can often be perceived as a mark of stubbornness. Others may not agree, and might see changing one’s mind as an indication of a lack of conviction or excessive agreeableness. Regardless of its moral significance, changing one’s mind is not easy to do if the belief we are changing is important to us, or connected to our personal self-esteem.
A state when two humans hold beliefs, opinions or ideas which are not fully compatible with each other. Both humans tend to believe that their own idea is fully correct, and that the other person’s idea is incorrect or somehow misinformed. Disagreement is a regular part of everyday life, and when applied productively and respectfully, can lead to debates and discussions which allow for ideas to spread, and perhaps even for viewpoints to change.
However, it is equally possible for disagreements to foster negative emotions, and when disagreements cannot be resolved, lead to one side attacking the other in some way. Disagreements in their most extreme form can lead to war, when two or more nations or smaller states cannot resolve a disagreement by discussion or other means.
Disagreement is often irreconcilable, as some people will not necessarily change their mind after they hear an opposing viewpoint. However, disagreement can be resolved when both parties are willing to listen to one another, take the other’s viewpoint seriously, and work to come to a mutual understanding of the matter at hand.
Literally, having awareness of one’s surroundings and being responsive to stimuli. The idea of consciousness is also used to mean several abilities supposedly unique to humans, such as independent thought, sapience, selfhood, control over the mind, etc., in order to distinguish ourselves from animals. However, it is not at all clear to what extent animals have consciousness – partly because our own definition is arbitrary and set by humans, but also because animals do not possess language in the same way that humans do, and cannot communicate to us their thoughts and feelings in order to demonstrate sapience, selfhood and so on.
An abstract concept denoting the fact that a large number of humans are aware of an individual’s existence - such a person is said to be “famous”. This usually follows from the accrual of status, prestige, or significant amounts of wealth, although it is possible to become famous by many other means as well. If someone is well known for something negative, such as having committed a crime or caused offence, then they are often said to be “infamous”, which simply means that their fame is the result of a bad thing.
Fame is often highly sought after; many people wish to become famous, as having other people recognise you for your accomplishments, or even simply acknowledge your existence, can raise your own self-esteem. However, fame can also have negative consequences - the most famous celebrities have little privacy, as the media will often report their everyday actions and deconstruct their private lives, which can seem invasive.
Someone who is famous can sometimes lose their fame, status and privileges through a careless comment or thoughtless action - as such, famous people are usually held to a much higher standard than those who are not famous. This relates to the concept of having a platform - famous people are in a position to reach a large number of people through their words and actions, and can choose to use this platform for benevolent means. Celebrities can also abuse this platform, and use it to spread malicious thoughts or ideas, or to abuse individuals through their actions. For this reason, the acquisition of fame is a huge responsibility to the individual who becomes famous.
Although the definition of art is not without considerable controversy and the topic of philosophical debate, we can loosely define art as anything produced by a human to be read, viewed, listened to, or engaged in by any other means by other humans. This includes paintings and sculptures, which are traditionally and easily recognisable as art, produced by an artist for other humans to engage with. Art can also take place in the form of literature, or stories told for other humans, as well as music, theatre, film or any other medium that allows an author or multiple authors to express ideas to an audience. Some examples of art are further from the prototypical definition of art, and therefore may be accepted by some but not others as an example of “art”. The prototypical definition of art will also depend on the culture, in which some forms of artistic expression are more common than others.
The concept of doing bad things deliberately, or intentionally engineering negative consequences, simply because one enjoys it, or wishes to make personal gains through these illicit means, is typically described as evil. The concept is difficult to pin down, however, as it is a subjective term that an individual would very rarely use to describe themselves; rather it is a term used by members of a society collectively to judge an individual whose actions fall out of line with their collective values.
To Western societies for example, infamous dictators such as Hitler, Stalin and Franco were “evil” men, although the men themselves and their supporters at the time would not have considered their actions to be bad, as they were morally defensible from their own perspective.
Evil is popularly portrayed in fiction and literature as an inherent quality within certain characters, which predisposes them to commit “evil” actions, and is often given no further justification. Thus antagonists in literature are often “evil for the sake of evil”. Real life is more nuanced and complex; people are rarely “evil”, but rather make decisions based on their circumstances, upbringing and personality, which they may see as justified and others may see as abhorrent. Evil in real life is usually possible when one does not understand the impact that one’s harmful actions can have on other people, or when one does not care.
An opinion or judgement - usually unfavourable - against an individual or group of people, that is made before an individual necessarily has sufficient evidence to make said judgement. For example, if a Briton believes that all French people are stupid and lazy, this is an example of prejudice, as the Briton most likely has not conducted a representative study of the IQs and work ethics of the entire French nation. Thus prejudices reflect pre-judgements, or judgements made before the justification.
Prejudices are often - as in the above example - generalisations or over-generalisations, which attempt to characterise a large group of distinct individuals according to a set of common criteria. While prejudice can be individual - people may pre-judge others for a variety of reasons - often these judgements reflect either conscious or unconscious beliefs about certain groups of people as a whole.
A definition is an explanation of the meaning of a word or concept. It comes from consensus, which means that there is very rarely “one” definition for any term. Different people will have different interpretations of what a term means, often without knowing it - only when these ideas come into conflict do we see that people’s subjective definitions differ. Because language changes all the time, definitions change, too - all definitions are tied to a particular language, culture, context, place and time.
There is never one precise way to define a term. Because we have no way to access “pure meaning” without the use of language, in order for speaker A to explain what a word means to speaker B, speaker A must necessarily use other words which both speakers already know. To define “tree”, you might invoke the terms “tall, leafy plant” or “a wooden structure containing leaves and bark”, or something else. While there is a limited subset of words that can be used to accurately capture the concept of a “tree”, the key point is that the definition is not a fixed property of the word “tree”; it is an arbitrary explanation created by a speaker for a listener, and depends on both participants’ knowledge of all terms used in the definition.
It is for this reason that children do not learn language purely by memorising the definitions of words - because in order to understand the definitions, they must first understand other words, and how would they learn those words in the first place? Instead, children learn language predominantly through context and identification - if the mother points to a tree and says “tree”, the child will learn the label for that particular tree.
As the child sees more examples of trees, and also learns what is not a tree (for example, if the child points at a bush and calls it a “tree”, he will be corrected), his internal knowledge of the “tree” concept will grow, until he has an innate and natural understanding of “treeness”. It is only once the child is older and possesses both a large vocabulary and linguistic skill that he might be able to provide a definition of a tree for another speaker, and even then it may not be obvious how to do so.
What this demonstrates is that definition is in some sense an invented practice, a means that we have created to teach people new concepts; it is not an inherent property of a word. In some sense, definition is not a natural act, and it is for this reason that we often rely on dictionaries to provide codified definitions of words, rather than create our own on the spot. Many concepts are incredibly difficult to provide any meaningful definition for at all without linguistic training and knowledge - how, for example, would you provide a definition for the words “that”, “about”, or “should”?
The subjective nature of definitions means that creating dictionaries - including this one - is essentially an arbitrary act guided by one’s intuition and consensus with other people. It is not a scientific method which anyone can repeat and get the same results. One’s own definitions of terms will also be influenced by one’s beliefs, culture and outlook. Although we usually strive for objectivity in defining the world around us, it is often difficult to avoid these subjective influences. It is important to note also that definitions and dictionaries provide a momentary snapshot of a small portion of a language in time. Dictionaries are never fully accurate - they do not and cannot define all words used by humans in any one language, as the words we use are constantly changing in meaning.
Something which provides either a positive benefit to encourage someone to do something, or a negative penalty to discourage someone from doing something. Incentives are commonly used as a tool to help motivate other people and ourselves, as people frequently complete tasks and actions for other people on the basis of incentives. A direct incentive may be some financial compensation, or receiving some item or service for completing a task. An indirect incentive might include a long-term benefit, such as the promise of a direct benefit in the future, or a reciprocal exchange, where A completing a task for B will make B more likely to complete a task for A in future.
Examples of negative incentives include the existence of a system to punish people who commit crimes. Most people are deterred from committing acts such as murder because these crimes carry a severe penalty: being deprived of one’s freedom and put in prison. A lesser example is the effort taken in England to stop people from smoking by placing unpleasant images of the damage done by long-term smoking on packets of cigarettes. This negative incentive attempts to deter people from smoking by forcing them to confront the health risks.
A type of relationship wherein two people share a close connection that is not necessarily romantic or sexual in nature. Friendship typically involves a high level of trust, familiarity, and often empathy with the other person. A good friendship, as in any good relationship, is arguably characterised by equal reciprocity: supporting and being supported, giving and receiving, trusting and being trusted. All of these aspects of a relationship should be in balance; otherwise the friendship is unequal. If one friend supports the other but is not sufficiently supported in turn, the friendship may be unequal, and one member may indeed be exploiting or taking advantage of the other. Equally, if one friend does not trust another, there is a lack of balance in the relationship.
Trust and faith are two very closely related concepts. Trusting someone means having a certain knowledge that that person will not seek to hurt, undermine or disappoint you, or treat you in a way that is not appropriate. Trust can operate on a romantic level, wherein partners trust one another to stay faithful, exclusive and honest in monoamorous relationships. Or trust can operate on a working and professional level, wherein two co-workers or partners trust completely in one another’s methods, and trust that both can execute an idea according to plan. While trust is based on certain knowledge of a partner, one can argue that faith necessarily involves uncertainty: for example, it is not certain that your partner will achieve their dream job, but you have faith in their ability to do so. It is not certain whether God exists, but you may have faith in his existence. The meaning of faith can differ between individuals, however.
In its strictest sense, this refers to having faith or trust in another person, system, entity, or higher power. In relationships, this refers to both partners being faithful to one another, which doesn’t only reflect the fact that they have faith in one another, but rather that they both act in a way which does not hurt their partner or violate their trust. For example, in a monoamorous relationship, faithfulness usually entails, among many other aspects, sexual, emotional and romantic exclusivity, or not having an interest in people outside of the relationship. In a polyamorous relationship, being faithful may entail honesty and open communication with all partners about aspects of their relationships.
Fidelity is a related concept to faith and can be used in the same manner as above, but also has another meaning, of the exactness to which a representation of something relates to the original. For example, a photograph as a representation of a scene, object or person generally has high fidelity - it represents the subject of the photograph in a way that is similar to how we perceive it using our eyes. A painting may have more or less fidelity to its subject - by its nature, art does not represent a subject exactly, but highlights and distinguishes important features according to the artist’s wishes. (See Abstraction, model.)
A desire to develop a certain skill, enter a certain career or fulfil a particular purpose or calling. An ambition is broadly a type of goal, generally one that defines the thing or things that an individual most wishes to do in their life, such as the attainment of a dream job or dream home, or fulfilling a goal with significant importance to that individual. Humans are free to develop and pursue any ambition within the laws and rules of a particular culture.
Sometimes there are constraints that make a particular ambition harder to achieve, or impossible to achieve without incurring significant loss, in the particular environment where one lives. For example, for gay and lesbian individuals living in countries where same-sex relationships are criminalised, and whose ambition is to find a stable partner, this ambition cannot be realised in their environment without harm, as it carries a legal penalty. Others may be limited in their ability to realise their ambition by socioeconomic factors - those who live below the poverty line and are struggling to survive may not be in a position to achieve whatever they wish until their circumstances change.
Marketing is the practice of telling other people, on a large scale, about a product, good or service that someone wishes to sell. The intent of marketing is to persuade as many people as possible to purchase the good or service. Marketers frequently exploit aspects of human psychology in order to reach the largest audience for their products.
“Marketing” refers to the broad practice of preparing a product for the market by informing potential consumers about it and persuading them to purchase it. “Advertising” is the term used to describe specific and direct actions taken in order to persuade potential customers to buy a good or service. Some of the most obvious forms include broadcasting a video - an advertisement or “ad” - on television or the internet in order to inform people about the product. Other forms include printed adverts in magazines, on posters and billboards, or dynamic advertisements embedded on internet websites, which users can click on in order to reach the product. Face-to-face advertisement is a more direct form, where employees of marketing companies try to reach consumers out in public and persuade them to buy a product.
Methods of advertisement are continually becoming more innovative: the proliferation of the internet, paired with the ability of private companies to collect vast amounts of data on individual consumers’ browsing habits, allows such companies to target consumers with ads that are more likely to appeal specifically to each consumer. Before this innovation, television, print and other forms of advertisement had to rely on disseminating adverts as widely as possible, in the hope that their target demographic (the people who would be most interested in the product) would receive the advertisement. Internet advertising now allows marketers to target consumers much more specifically. Some countries have data protection laws, such as the GDPR in the European Union, which seek to limit the ability of private companies to collect and utilise consumer data in advertising.
A form of entertainment consisting of moving pictures and sound, for the purpose of telling a story or narrative. Films are considered a kind of art. People who are featured prominently as characters in a film are actors, who the audience understand are portraying characters, not their true selves. Similarly, audiences understand that films do not have to mirror real life, and unrealistic or fantastical things can occur in films that would be impossible outside of the narrative.
Thus films involve a suspension of disbelief, both on the most basic level of understanding that the human actors are representing fictitious characters, and in the structure and content of the narrative. A film is also known as a movie in the U.S.A. and other English-speaking countries.