As a math guy who loves reality TV, I was also drawn to the show and wrote a blog post [0] about how to programmatically calculate the probabilities as the show progresses. It was a lot of fun optimizing it for performance. You can `pip install ayto` to use it to follow along with the show or try out scenarios.

[0]: https://danturkel.com/2023/01/25/math-code-are-you-the-one.h...
The linked post is a very thorough treatment of AYTO and a great read. I really like the "guess who" bit on how to maximize the value of guesses. It's a shame the participants aren't allowed to have pen and paper—it makes optimization a lot trickier! I'm impressed they do as well as they do.
And sometimes they just don't do better, as a plot point: staying together an extra week after finding out they are not the one, because of the intensity of their love (they met 4 days before).
This was great, but it skipped over the most interesting bit - how you actually choose which matchups and truth booths. That is, an actual strategy that contestants could use that doesn't require a computer.
Thank you! This is consistent with feedback I got from The Pudding, and is ultimately the reason they didn't go ahead with the post. I tried reverse-engineering the information-theory approach to see what sort of decisions it made.
I noticed that for any match up score of X, the following match up would keep exactly X pairs in common. So if they scored 4/10 one week, they would change 6 couples before the next one. Employing that approach alone performed worse than the contestants did in real life, so I didn't think it was worth mentioning!
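For what it's worth, here's a rough sketch of that heuristic (my own toy code, not anything the contestants or the post use): keep `score` couples from the last matchup and re-pair everyone else so that none of the moved couples repeat.

```python
import random

def next_guess(prev_guess, score):
    """Keep exactly `score` couples from the previous matchup and change the rest.
    prev_guess[w] is the man currently paired with woman w (toy sketch of the heuristic)."""
    n = len(prev_guess)
    keep = set(random.sample(range(n), score))          # couples left untouched
    movers = [w for w in range(n) if w not in keep]     # note: score == n-1 can't be realised
    while True:                                         # rejection-sample a derangement
        shuffled = random.sample([prev_guess[w] for w in movers], len(movers))
        if all(new != prev_guess[w] for w, new in zip(movers, shuffled)):
            break
    guess = list(prev_guess)
    for w, new in zip(movers, shuffled):
        guess[w] = new
    return guess

# e.g. after a 4/10 week: keep 4 couples and change the other 6 before the next matchup
print(next_guess(list(range(10)), score=4))
```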
After he described the rules, my immediate reaction was 'this is like Mastermind'. Sure enough, further down the page:
> Other than that, in my research I came across a boardgame called Mastermind, which has been around since the 70s. This is a very similar premise - think of it as "Guess Who?" on hard mode.
When my wife and I watched the show, I wrote a solver on the side so we always had the current probabilities and impossible combinations at hand.
I am thinking about making a website for it when the next season starts.
Also: in Germany at least they have 10 x 10 candidates from the start, but sometimes they add an 11th or even a 12th of one gender so that there are double matches (e.g. one woman has two men as matches and needs to find one of them to succeed). This raises the number of possible combinations quite a bit.
Yes, there's a gender-fluid season and a season where someone had more than one match, as well as people leaving partway through the season (apparently perfect matches are interchangeable...). All very interesting spins on the core problem to solve; I'd be really interested if anyone tries to tackle those seasons.
They have an example that calculates the expected information gained by truth booths and all of the top ones are giving more than one bit. How can this be? It's a yes/no question, so a max of 1 bit should be possible.
We have to make a distinction between "expected information gain" vs "maximum information gain". An answer of "yes" generally gains >1 bit, but an answer of "no" generally gains <1 bit, and the average outcome ends up <1. It is impossible for a single yes/no question to have an expected gain of >1; the maximum possible is precisely 1.
The total probabilities add up to 1. But I’m not following how that relates to the average bits.
Despite summing to 1, the exact values of P(true) and P(false) are dependent on the options which have previously been discounted. Then those variables get multiplied by the amount of information gained by either answer.
It is definitional, which I mean in the strictest mathematical sense: the information content of a result is directly derived from how “unexpected” it is.
A result which conveys 2 bits of information should occur with 25% expected probability. Because that’s what we mean by “two bits of information.”
The article states "Suppose we have calculated the expected information gained by potential truth booths like below: Expected information: 1.60 bits ..." This is impossible because of the general fact in information theory that (p(true) * bits_if_true) + (p(false) * bits_if_false) <= 1. If they had said "Suppose we have calculated the maximum information gained...", then 1.6 bits would be valid. They said "expected information" though, so 1.6 bits is invalid.
So, you have n options, you ask a question, and now you're down to m options.
The number of bits of information you gained is -log₂ (m/n).
If you ask a question which always eliminates half of the options, you will always gain -log₂ (1/2) = 1 bit of information.
If you go with the dumber approach of taking a moonshot, you can potentially gain more than that, but in expectation you'll gain less.
If your question is a 25-75 split, you have a 25% chance of gaining -log₂ (1/4) = 2 bits, and a 75% chance of gaining -log₂ (3/4) = 0.415 bits. On average, this strategy will gain you (0.25)(2) + (0.75)(0.415) = 0.8113 bits, which is less than 1 bit.
The farther away you get from 50-50, the more bits you can potentially gain in a single question, but - also - the lower the number of bits you expect to gain becomes. You can never do better than an expectation of 1 bit for a trial with 2 outcomes.
(All of this is in the article; see footnote 3 and its associated paragraph.)
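A tiny sketch in plain Python (my own code and naming, not from the article) that reproduces those numbers: each outcome is worth -log₂ of its probability, and the expectation is the probability-weighted average, which tops out at 1 bit for a 50-50 question.

```python
import math

def yes_no_question(p_yes):
    """Bits gained by each outcome, and the expected gain, for a yes/no question."""
    bits_yes = -math.log2(p_yes)
    bits_no = -math.log2(1 - p_yes)
    expected = p_yes * bits_yes + (1 - p_yes) * bits_no
    return bits_yes, bits_no, expected

print(yes_no_question(0.50))   # ~ (1.0, 1.0, 1.0)        the ideal even split
print(yes_no_question(0.25))   # ~ (2.0, 0.415, 0.8113)   the 25-75 moonshot above
```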
The article explicitly calls out that the expected gain maxes out at one bit:
>> You'll also notice that you're never presented with a question that gives you more than 1 expected information, which is backed up by the above graph never going higher than 1.
So it's strange that it then goes on to list an example of a hypothetical (undescribed, since the scenario is impossible) yes/no question with an expected gain of 1.6 bits.
That's not how information works. Learning more from one outcome than the other decreases the probability of that outcome occurring, so the expected information (which is the sum of the outcome probability times the outcome information for each of the two possible outcomes) is always less than or equal to one.
If all you can get is a "true" or "false" you expect, at most, one bit of information.
Right - but coming back to the original question, if I'm not mistaken, the explanation is that the blogpost is measuring information gained from an actual outcome, as opposed to _expected_ information gain. An example will help:
Say you're trying to guess the number on a 6-sided die that I've rolled. If I wanted to outright tell you the answer, that would be 2.58 bits of information I need to convey. But you're trying to guess it without me telling, so suppose you can ask a yes or no question about the outcome. The maximum of the _expected_ information gain is 1 bit. If you ask "was it 4 or greater?", then that is an optimal question, because the expected information gain is min-maxed. That is, the minimum information you can gain is also the maximum: 1 bit.

However, suppose you ask "was it a 5?". This is a bad question, because if the answer is no, there are still 5 numbers it could be. Plus, the likelihood of it being 'no' is high: 5/6. However, despite these downsides, it is true that 1/6 times, the answer WILL be yes, and you will gain all 2.58 bits of information in one go. The downside case more than counteracts this and preserves the rules of information theory: the _expected_ information gain is still < 1 bit.
EDIT: D'oh, nevermind. Re-reading the post, it's definitely talking about >1 bit expectations of potential matchings. So I don't know!
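To put numbers on the dice example (a minimal sketch in plain Python; nothing here is from the article): naming the roll outright costs log₂6 ≈ 2.58 bits, the even question has an expected gain of exactly 1 bit, and the "was it a 5?" long shot averages well under 1 even though a lucky yes delivers all 2.58 bits at once.

```python
import math

def expected_bits(p_yes):
    """Expected information from a yes/no question answered 'yes' with probability p_yes."""
    return sum(-p * math.log2(p) for p in (p_yes, 1 - p_yes) if p > 0)

print(math.log2(6))          # ~2.585 bits to name the roll outright
print(expected_bits(3 / 6))  # "was it 4 or greater?" -> 1.0 bit expected
print(expected_bits(1 / 6))  # "was it a 5?"          -> ~0.65 bits expected
```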
It's not a yes/no per contestant, it's per edge between contestants. There are n(n-1)/2 of these.
A true answer for a potential match is actually a state update for all of the (n-1) edges connecting either contestant: that's 2(n-2) edges that can be updated to be false. Some of these may already be known from previous rounds' matchups, but that's still more than a single binary.
An answer of "yes" will generally eliminate many edges, with potential for >1 bit. However, an answer of "no" will generally eliminate just that one edge, which is generally <1 bit.
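To put numbers on that (a sketch assuming ten couples and no prior information, so all 10! pairings start out equally likely): a "match" collapses things down to the 9! pairings containing that couple, while a "no match" barely dents the space.

```python
import math

total = math.factorial(10)            # 3,628,800 candidate pairings
yes_left = math.factorial(9)          # pairings in which the tested couple is together
no_left = total - yes_left

p_yes = yes_left / total                            # 1/10
bits_yes = math.log2(total / yes_left)              # ~3.32 bits on a "match"
bits_no = math.log2(total / no_left)                # ~0.15 bits on a "no match"
print(p_yes * bits_yes + (1 - p_yes) * bits_no)     # expected ~0.47 bits per booth
```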
But you don't receive more than a single binary value; you get a yes or no.
If both of these are equally likely, you gain one bit of information, the maximum possible amount. If you already have other information about the situation, you might gain _less_ than one bit on average (because it confirms something you already knew, but doesn't provide any new information), but you can't gain more.
The claim was that one bit was the maximum amount of information you could gain, which is clearly false.
Just to make this unambiguous: If you ask me to guess a number between one and one billion, and by fantastic luck I guess right, your “yes/no” answer obviously gives me more than one bit of information as to the right answer.
> The claim was that one bit was the maximum amount of information you could gain, which is clearly false.
That's not what I see.
https://news.ycombinator.com/item?id=46282007 They have an example that calculates the expected information gained by truth booths and all of the top ones are giving more than one bit. How can this be? It is a yes/no question a max of 1 bit should be possible
https://news.ycombinator.com/item?id=46282343 the expected information (which is the sum of the outcome probability times the outcome information for each of the two possible outcomes) is always less than or equal to one.
The specific comment you replied to had one sentence that didn't say "expected" or "average", but the surrounding sentences and comments give context. The part you objected to was also trying to talk about averages, which makes it not false.
> If both of these are equally likely, you gain one bit of information, the maximum possible amount. If you already have other information about the situation, you might gain _less_ than one bit on average (because it confirms something you already knew, but doesn't provide any new information), but you can't gain more.
Can’t gain more!
The core confusion is this idea that the answer to a yes/no question can’t provide more than one bit of information, no matter what the question or answer. This is false. The question itself can encode multiple bits of potential information and the answer simply verifies them.
I’m not arguing with that, it’s basic information theory.
One bit, however, is not “the maximum possible amount” you can gain from an oracular answer to a yes/no question. The OP covers exactly this point re: the “Guess Who?” game.
Yes, you learn more than 1 bit in that case. However, if you are told A is false, you still don't know whether B or C is true, so you gain less than 1 bit. Assuming A, B and C all have equal probability, your average/expected information gain is <1 bit.
If you ask the question "which of A, B, or C is true?" then you're not asking a yes/no question, and it's not surprising that you expect to gain more than 1 bit of information.
No, that doesn't make sense either. For a truth booth, you're taking all the possible pairing arrangements, and dividing them into two sets. After the answer, one of those two sets is false. There is no way that this can provide more than 1 bit of information.
The match-ups can however give more information, as it isn't giving a yes/no answer.
Fun post. I'd be interested to know: How many consecutive Truth Booths (or: how many consecutive Match Ups) are needed to narrow down the 10! possibilities to a single one?
Discussing "events" (ie, Truth Booth or Match Up) together muddles the analysis a bit.
I agree with Medea above that a Truth Booth should give at most 1 bit of information.
Agreed. There's an argument elsewhere about how a truth booth can possibly have an expected return of more than 1 bit of information, but in reality most of the time it's going to give you way less than that.
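As a rough lower bound (my own back-of-the-envelope, not from the article): pinning down one of the 10! pairings takes log₂(10!) ≈ 21.8 bits, so even 22 perfect 50-50 yes/no answers would be the floor. A cold Truth Booth averages roughly half a bit (see the sketch further up), whereas a Match Up answer is one of eleven scores and can give up to log₂11 ≈ 3.46 bits, so the Match Ups have to carry most of the load.

```python
import math

print(math.log2(math.factorial(10)))   # ~21.79 bits to single out one of the 10! pairings
print(math.log2(11))                   # ~3.46 bits: the most a 0-10 Match Up score can give
```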
I saw an episode of this and felt the contestants didn’t seem that interested in winning the money. Just romance. I was curious how suboptimally they tended to play.
For a start, the setting is an emotive one. It's not just a numeric game with arbitrary tokens, it's about "the perfect romantic partner." It would take an unusually self-isolating human to not identify who they feel their perfect match should be and bias towards that, subconsciously or consciously. We (nearly) all seek connection.
Then, it's reality TV. Contestants will be chosen for emotional volatility, and relentlessly manipulated to generate drama. No-one is going to watch a full season of a handful of math nerds take a few minutes to progress a worksheet towards a solution each week coupled with whatever they do to pass the time otherwise.
I'd watch a game show where you put a variety of math nerds on each team and watch them argue about the optimal strategy. Whose strategy will win? The quant analyst or the bioinformatician? Tune in next week!
Watched a fantastic film about this on the plane a few years ago, "Liar Game - Reborn". There is some fairly sophisticated logic puzzling and scheming going on (see e.g. the sample illustration https://imgur.com/a/0AOb67G from an interlude about 50min in, where there are 3 groups of people who mutually distrust each other, each group knows a secret collection of 3-4 integers unique to it, and they want to deniably share integers with each other which are "not my team's". Another participant watches what happened and realizes in retrospect this is how the info was shared.)
We need a ManningCast version of the show. For those unaware, ManningCast is a show following an NFL game with special guests and nontraditional commentary and analysis. Think of it kind of like having the Mannings in the living room while watching an NFL game.
In my hypothetical version of "Are you the one?", the math nerds would be giving commentary and explaining the math behind how they'll solve "Are you the one?" while also hilariously explaining how foolish the contestants' theories are.
Just ask them to describe Shannon entropy. If they start talking about information, they're out; if they start talking about their crazy cousin, they're in.
> No-one is going to watch a full season of a handful of math nerds take a few minutes to progress a worksheet towards a solution each week coupled with whatever they do to pass the time otherwise.
Um, what about those of us who watch Blood on the Clocktower streams?
That's because the real game is occurring both before and after the show in modern reality tv competitions. The goal is to be entertaining and get social media followers and potential invites to further reality tv shows.
A thing to note - the contestants are not allowed to have even pen and paper, as mentioned in the other blogpost. So they need to do these computations in their heads.
> Season 8: In this season, they introduced gender fluidity. Whilst an interesting problem on its own, this would have wreaked havoc on my model.
Well, I guess free money, except for that one. In that one, one of the contestants, Danny, did the math for optimizing their remaining Truth Booths and Match Ups to get it down to a 50/50 shot.
> I also pitched this idea to The Pudding, and had a great experience with them nerding out about this subject. Though they didn't take me up on my idea, I left with really great and actionable feedback, and I'm looking forward to my next rejection.
Would've been a great Pudding post imo, but oh well, happy to find this nice devblog instead.
This brings up an area that’s been on the edge of my curiosity for years: how do you combine the knowledge of the experts (contestants) with logic to do better than either strategy alone?
It’s mostly about how to elicit the information from the contestants. Once you have the probabilities from them, it seems relatively straightforward.
I think you could do some form of Bayesian analysis with the prior being how likely each contestant thought that their available partners were "The One" for each other. Then the truth booth would be used to update your priors.
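A minimal sketch of that idea (everything here is made up for illustration: the `affinity` numbers, the function names, the tiny 4x4 cast): weight every pairing by how strongly the contestants believe in its couples, then let a truth booth result zero out the inconsistent pairings and renormalise.

```python
from itertools import permutations

n = 4  # toy cast; the show uses 10 x 10
# affinity[w][m]: how plausible the contestants find woman w with man m (made-up weights)
affinity = [[1, 4, 1, 1],
            [2, 1, 1, 1],
            [1, 1, 3, 1],
            [1, 1, 1, 2]]

def prior_weight(pairing):
    """pairing[w] = man matched to woman w; the prior is the product of pair affinities."""
    weight = 1.0
    for woman, man in enumerate(pairing):
        weight *= affinity[woman][man]
    return weight

weights = {p: prior_weight(p) for p in permutations(range(n))}

def truth_booth(weights, woman, man, is_match):
    """Bayesian update: keep only the pairings consistent with the booth's verdict."""
    return {p: w for p, w in weights.items() if (p[woman] == man) == is_match}

weights = truth_booth(weights, woman=0, man=1, is_match=False)
total = sum(weights.values())
print(sum(w for p, w in weights.items() if p[2] == 2) / total)  # P(woman 2 with man 2)
```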
If the goal is to find the perfect matching in some maximum number of turns or less, it's possible to do even better by using a full game tree that minimises the maximum height of the tree ( = number of turns required), instead of using information/entropy as done here.
Basically, using the entropy produces a game tree that minimises the number of steps needed in expectation -- but that tree could be quite unbalanced, having one or more low-probability leaves (perfect matchings) many turns away from the root. Such a leaf will randomly occur some small fraction of the time, meaning those games will be lost.
For concreteness, a game requiring 6 bits of information to identify the perfect matching will take 6 steps on average, and may sometimes require many more; a minimax tree of height 7 will always be solved in at most 7 steps. So if you're only allowed 7 steps, it's the safer choice.
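Here's a toy illustration of the "minimise the worst case" idea (my own brute force, restricted to truth-booth-style yes/no questions and a 3x3 cast; the real game also has Match Ups and 10! states, where this blows up):

```python
from itertools import permutations

n = 3
pairings = list(permutations(range(n)))                  # pairing[w] = man matched to woman w
queries = [(w, m) for w in range(n) for m in range(n)]   # one possible truth booth per couple

def worst_case_turns(candidates):
    """Fewest booths that always pin down the pairing, even with the unluckiest answers."""
    if len(candidates) <= 1:
        return 0
    best = float("inf")
    for w, m in queries:
        yes = [p for p in candidates if p[w] == m]
        no = [p for p in candidates if p[w] != m]
        if not yes or not no:
            continue                                     # this booth can't teach us anything here
        best = min(best, 1 + max(worst_case_turns(yes), worst_case_turns(no)))
    return best

print(worst_case_turns(pairings))   # 3 (and log2(3!) ~ 2.6, so 3 is also the information floor)
```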
> For concreteness, a game requiring 6 bits of information to identify the perfect matching will take 6 steps on average, and may sometimes require many more
I'm not following your logic. Consider the setup we actually have:
1. You get to ask a series of yes/no questions. (Ignoring the matchups.)
2. Each question can produce, in expectation, up to one bit of information.
3. In order to achieve this maximum expectation, it is mathematically necessary to use questions that invariably do produce exactly one bit of information, never more or less. If your questions do not have this property, they will in expectation produce less than the maximum amount of information, and your expected number of steps to the solution will increase, which contradicts your stated goal of minimizing that quantity.
You get the minimum number of steps needed in expectation by always using questions with maximum entropy. Yes. But those questions never have any variation in the amount of entropy they produce; a maximum entropy strategy can never take more - or fewer - steps than average.¹
¹ Unless the number of bits required to solve the problem is not an integer. Identifying one of three options requires 1.585 bits; in practice, this means that you'll get it after one question 1/3 of the time and after two questions the other 2/3 of the time. But identifying one of 128 options requires 7 bits, you'll always get it after 7 questions, and you'll never be able to get it after 6 questions. (Assuming you're using a strategy where the expected number of questions needed is 7.)
You're correct but the complexity is in the things you're ignoring for simplicity.
This game is constructed such that the questions you can ask are not arbitrary, so you cannot choose them to always produce one bit of entropy (you need to frame your questions as ten matchups in parallel, using all the contestants exactly once) and the number of bits you need may indeed not be an integer.
Because you can't choose your questions to partition the state space arbitrarily, that affects not just the question you ask today, but also previous days: you want to leave yourself with a partitionable space tomorrow no matter what answers you get today.
In the Guess Who analogy, it's against the rules or at least the spirit to ask "does your character have a name which is alphabetically before Grace?". That would allow a strategy which always divides the state space exactly in two.
> Unless the number of bits required to solve the problem is not an integer.
That is one case where root-to-leaf path lengths can vary, though it's not obvious to me that it exhausts all such cases -- in particular, even if we have "ideal leaves" (numbering a power of 2, and each equally likely), it's not clear that there is always a question we can ask that divides a given node's leaves exactly in half.
Loved your post, really enjoyed getting into the meat of it. I wanted to position mine for a layman, and kept asking myself "can I explain this to my Dad?"
I think where the post falls short is the absence of a silver bullet that contestants can use to win the game sooner.
https://www.mcsweeneys.net/articles/the-mastermind-box-cover...
So a yes might rule out 75% of remaining options (for example) which provides 2 bits of information.
But “Yes” obviously gives me much more than one bit of the information I need to know the answer.
"you might gain _less_ than one bit on average [...], but you can't gain more."
On. Average.
That's a true statement. Can't gain more than one bit on average.
That's why people are talking about the maximum expected value.
A lot more upbeat than "Alice in Borderland".
My lived childhood is old enough to be someone's "research."
> 0 0.3679
> 1 0.3679
> 2 0.1839
> 3 0.0613
> 4 0.0153
> 5 0.0031
For 0, it's the well-known [1] result 1/e; I remember a puzzle where people check their hats and then each picks one back at random.
Looking at the table, it looks like the general formula is 1/(e*k!), where k is the number of correct matches; that's a Poisson distribution with mean 1. Compare with https://en.wikipedia.org/wiki/Poisson_distribution#Examples_...
Anyway, I'm not sure if my observation helps too much to solve the original problem.
[1] at least I know it :)
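For what it's worth, a short check (my own snippet) that the quoted table is the fixed-point distribution of a random pairing of 10 couples, and that 1/(e*k!), the Poisson(1) mass function, matches it to the shown precision:

```python
import math

def derangements(m):
    """D(m): permutations of m items with no fixed point (the hat-check numbers)."""
    a, b = 1, 0                          # D(0), D(1)
    for i in range(2, m + 1):
        a, b = b, (i - 1) * (a + b)      # D(i) = (i-1) * (D(i-1) + D(i-2))
    return b if m else a

n = 10
for k in range(6):
    # exactly k correct couples: choose which k are right, derange the other n-k
    exact = math.comb(n, k) * derangements(n - k) / math.factorial(n)
    poisson = 1 / (math.e * math.factorial(k))
    print(f"{k}  exact {exact:.4f}   1/(e*k!) {poisson:.4f}")
```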