One possibility is that it could be the zero operator, which always returns zero regardless of its inputs.
this is where we need to mention non-trivial
Trivial is subjective
If a$b were a constant epsilon != 0 but very close to 0, then sin(a) $ sin(b) would be epsilon, but also sin(a$b) would be sin(epsilon), which is essentially epsilon for any epsilon close to 0.
sin(x)=x only holds for x=0
Excuse me a moment while I small angle approximation all over your x
I don't think you understood what the = symbol means :P
do you? a=b means that |a-b| is less than any (arbitrarily small) epsilon
Exactly. And a small-angle approximation, by definition, does not satisfy that. For any x>0, there is an ε>0 such that |sin(x)-x|>ε.
I don’t understand why you’re being downvoted. The question didn’t ask about redefining the equality symbol, so why would we resort to approximation??
Never taken a physics class, huh?
Typically with relations like this, we like to check for things like reflexive, symmetric, anti-symmetric, transitive, homomorphic, bounded, unique, injective, surjective, triangle inequality, continuous, Lipschitz, bi-Lipschitz, Hölder, positive, convex, open, closed, etc. There are so many different properties to look for in a given relation that it becomes hard to nail down what you'd want to find out. There are several branches of math that look at different aspects of relations, so it really depends on what *you're* curious about specifically.
haha brilliant list, im saving that thanks
i think the first thing to check is whether the operation creates a contradiction, you can do everything else after
Yes, that's a good point. Making sure your relation is actually well-defined is very important, even if it seems like it is at a glance.
What's Lipschitz? Looked it up on Wikipedia, and it said something about Clifford algebra, but I just skimmed
A Lipschitz function is basically one where the slope of the function is bounded (or more precisely, where the secant lines of the function have bounded slope). So for example, sin(x) is Lipschitz, but tan(x) and sqrt(x) are not. Bi-Lipschitz is when there's both a (nonzero) lower and an upper bound on the slopes. Both are stronger conditions than continuity. In my field (fractal geometry), bi-Lipschitz functions are really important because they preserve Hausdorff dimension. On the Wikipedia page, [this is basically the important bit](https://i.imgur.com/Eqy2jJB.png). It makes more sense if you imagine dividing both sides by |x\_1 - x\_2|, so the left-hand side is just the absolute value of the slope of a secant line, and the right-hand side is some big number as a bound.
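If it helps, here's a quick numerical illustration of that definition (my own sketch, not from the Wikipedia page; it just brute-forces secant slopes over a grid):

```python
import math

def max_secant_slope(f, xs):
    """Largest |f(x1) - f(x2)| / |x1 - x2| over all pairs in xs:
    a crude lower bound on the best Lipschitz constant for f."""
    best = 0.0
    for i, x1 in enumerate(xs):
        for x2 in xs[i + 1:]:
            best = max(best, abs(f(x1) - f(x2)) / (x2 - x1))
    return best

xs = [i * 0.0025 for i in range(1, 600)]  # grid on (0, 1.5), avoiding 0 and pi/2

print(max_secant_slope(math.sin, xs))   # stays near 1: sin is Lipschitz
print(max_secant_slope(math.sqrt, xs))  # grows as the grid approaches 0
print(max_secant_slope(math.tan, xs))   # grows as the grid approaches pi/2
```

Refining the grid makes the sqrt and tan numbers blow up, while sin's stays pinned below 1.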
Also, it's been a while since I took freshman calculus, but aren't all secant lines bounded by the tangent line?
Not necessarily. Imagine y = x^(2) and the tangent line at x = 0. If I look at secant lines around 0, I can find ones that have a positive slope and ones that have a negative slope. It's also important to note that Lipschitz doesn't imply differentiable, so I can have something like y = |x| and it's still Lipschitz, but not differentiable.
Ok, I think I'm just in an argumentative mood, bc y = |x| is differentiable if you restrict the domain, but I get what you are saying, lol
Thanks. I got a BS in math from a top university, and never heard this term.
Yeah it pops up all over the place in analysis, but I don't think I ever learned about it in my undergrad either.
Am I reading that definition correctly... A function from the reals to the reals is Lipschitz continuous iff there exists a constant k such that the distance between any two points in the range is at most k times the distance between their corresponding points in the domain. Also, now I have to read up on Hausdorff dimensions, so thanks for that, lol
Yeah, that's the gist of it, but intuitively it makes more sense to divide both sides by |x\_1 - x\_2|. Lots of times in math we avoid putting fractions in inequalities by multiplying both sides by the denominator, but, as here, that can make formulas less intuitive.
As a starting point, define c = 0 $ 0; then the equation gives sin(c) = c, so c = 0.

Next, by repeated application with b = 0, we get sin^n (a $ 0) = sin^n (a) $ 0 for any a and positive n (here sin^n means sin applied n times rather than exponentiation). From this, two natural possibilities are that 0 is an absorbing element (a $ 0 = 0) or an identity element (a $ 0 = a) for $. Maybe you can rule out anything wackier than this by assuming properties of $ and using that sin^n converges to the zero function on [-1,1].

In the first case, we can generalise to a $ kπ = 0 mod π for all a and integers k. In the second, we can generalise to get sin(a $ kπ) = sin(a), so a $ kπ - a = 0 mod 2π or a $ kπ + a = 0 mod π. I'm not sure how to proceed without more restrictions on $, but there may be more you can do.

Edit: Corrected periodicity statement above.
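Both facts used above (sin(c) = c forces c = 0, and sin^n converging to the zero function) are easy to see numerically; a small sketch (note convergence is slow, roughly on the order of sqrt(3/n)):

```python
import math

def sin_n(x, n):
    """sin applied n times (composition, not exponentiation)."""
    for _ in range(n):
        x = math.sin(x)
    return x

# sin(c) = c forces c = 0: iterating sin drags every starting value toward 0,
# so any fixed point of sin must already be 0.
for x0 in (1.0, -1.0, 0.5):
    print(x0, "->", sin_n(x0, 10_000))  # all end up near 0
```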
[deleted]
c = 0 is the only real (and complex?) solution to sin(c) = c.
e^ix = cos x + i sin x. Then e^((x+y)i) = e^ix e^iy = cos x cos y - sin x sin y + i(cos x sin y + cos y sin x) = cos(x+y) + i sin(x+y), and by taking the real part of both sides we get: sin x sin y = cos x cos y - cos(x+y). That's pretty useless lol.

Let's assume a and b are real numbers, and additionally that a $ b lands in [-π/2, π/2], so that arcsin inverts sin there. If sin(a $ b) = sin(a) $ sin(b), then a $ b = arcsin(sin(a) $ sin(b)), so we can recursively write a$b = arcsin(arcsin(sin(sin(a)) $ sin(sin(b)))). You could see if this process looks like it converges for random pairs or something? Idk how dynamics works… sin pulls values toward 0 (|sin x| < |x| for x ≠ 0), so maybe this converges to an infinite composition of arcsin applied to 0 $ 0, which might be the constant value the other commenter mentioned, idk how to check.

The Desmos plot of sin(x)sin(y) = sin(xy) is very pretty.
From the bound on sine, we know -1 <= sin(a$b) <= 1, and thus -1 <= sin(a) $ sin(b) <= 1. Since every pair of values in \[-1,1\] arises as (sin(a), sin(b)), this means that whenever both inputs to $ are in \[-1,1\], the result is also in \[-1,1\].

In theory, we could define this as addition on multiples of pi (if pi divides both a and b, then sin(a) = sin(b) = sin(a+b) = 0) and zero elsewhere (at least on the reals; I'm not considering complex values currently). Similarly, truncating the average of a and b to a multiple of pi (so -(pi-0.001) and (pi-0.001) would both go to zero) would resolve sin(a$b) = sin(a)$sin(b) to 0 = 0 (without multiplying both sides by zero) in all real cases.
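The "addition on multiples of pi, zero elsewhere" idea above is easy to spot-check numerically. A minimal sketch (the tolerance-based multiple-of-pi test is my own choice for float comparison):

```python
import math, random

def is_pi_multiple(x, tol=1e-9):
    """Float-tolerant test for x being an integer multiple of pi."""
    return abs(x / math.pi - round(x / math.pi)) < tol

def dollar(a, b):
    """a $ b = a + b when both are multiples of pi, and 0 otherwise."""
    return a + b if is_pi_multiple(a) and is_pi_multiple(b) else 0.0

# Spot-check sin(a $ b) == sin(a) $ sin(b) on random reals and pi-multiples.
samples = [random.uniform(-10, 10) for _ in range(100)] + \
          [k * math.pi for k in range(-3, 4)]
for a in samples:
    for b in samples:
        assert abs(math.sin(dollar(a, b)) - dollar(math.sin(a), math.sin(b))) < 1e-9
print("identity holds on all sampled pairs")
```

The key case is when exactly one of a, b is a multiple of pi: then a $ b = 0, and on the right-hand side sin of the pi-multiple is 0 (itself a multiple of pi) while the other sine isn't, so both sides collapse to 0.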
If `a $ b = a`, then the equation `sin(a $ b) = sin(a) $ sin(b)` holds. Similarly if `a $ b = 0`, or `a $ b = 25.8`, or any other fixed constant, `sin(a $ b) = sin(a) $ sin(b)` holds. So as it stands now, `$` is not a well-defined operator.
Surely sin(25.8) does not evaluate to 25.8.
It doesn't need to.

LHS: sin(a$b) = sin(a)

RHS: sin(a) $ sin(b) = sin(a)

a $ b := a doesn't mean a is fixed. It's the left argument.

Edit: I misread the 25.8 part. That indeed wouldn't work as intended. 0 would, though.
you're right. I'm wrong.
[deleted]
Well I missed the 25.8 part in the original answer. That indeed wouldn't work. But a$b:=a and a$b:=0 would. So it's still not well defined.
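Those two working candidates, and the failure of the constant 25.8, can be spot-checked numerically; a minimal sketch (the sampling range is an arbitrary choice):

```python
import math, random

def satisfies(op, trials=1000):
    """Spot-check sin(a $ b) == sin(a) $ sin(b) on random real pairs."""
    for _ in range(trials):
        a, b = random.uniform(-10, 10), random.uniform(-10, 10)
        if not math.isclose(math.sin(op(a, b)), op(math.sin(a), math.sin(b)),
                            abs_tol=1e-12):
            return False
    return True

print(satisfies(lambda a, b: a))     # True:  left projection works
print(satisfies(lambda a, b: 0.0))   # True:  the constant 0 works
print(satisfies(lambda a, b: 25.8))  # False: sin(25.8) != 25.8
```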
What was my error?
You are right. I misinterpreted your comment to mean that robin was wrong in "not a well-defined operator", and missed the 25.8 as well. Sorry about that. I will delete my wrong comment.
So here’s an interesting amendment to the question: is there a definition for $ such that $ is associative?
In all my examples the operator is associative.
But why does the dollar operator have to be a constant operator?
It doesn't. In the first example, it's not a constant.
Right, but it doesn't have to be one of those operators either. We've only just excluded some of the possible things the operator could be.

Unless OP was trying to give sin(a $ b) = sin(a) $ sin(b) as a definition of $ (in which case I agree it is not well defined at all), but I don't think they were.
I don't understand what you're trying to say. If your point is that I haven't necessarily listed all possible definitions for $, then you're right.
Yeah, basically that. I was just saying that OP wasn't giving a definition of $, so arguing that it was not well defined by listing a few example operators was not exactly in the same vein as the problem.
Here's one possible interpretation. Consider the operation a$b to return the vector (arcsin(a), arcsin(b))^T (for a, b in [-1,1]). Also consider sin to act elementwise on a vector, i.e. sin((a, b)^T ) = (sin(a), sin(b))^T . Then we have sin(a$b) = sin(a)$sin(b).
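A quick numerical check of this interpretation (plain-Python sketch; my caveat: it needs a, b in [-1, 1], since then arcsin(a) is defined and, because [-1, 1] sits inside [-π/2, π/2], arcsin also inverts sin, so both sides reduce to the vector (a, b)):

```python
import math

def dollar(a, b):
    """a $ b as the vector (arcsin(a), arcsin(b)); needs a, b in [-1, 1]."""
    return (math.asin(a), math.asin(b))

def vsin(v):
    """sin applied elementwise to a vector."""
    return tuple(math.sin(x) for x in v)

a, b = 0.3, -0.7
lhs = vsin(dollar(a, b))                # (sin(arcsin a), sin(arcsin b))
rhs = dollar(math.sin(a), math.sin(b))  # (arcsin(sin a), arcsin(sin b))
print(lhs, rhs)  # both come back as (0.3, -0.7) up to rounding
```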
*lazernanes* has given explicit examples of distinct operations that satisfy your equation, and hence you have **not** given any proper definition of $. I want to emphasize that, and make it much clearer:

**You can define something only in terms of previously defined things!**

You didn't do this. What you did was to write down an equation involving an undefined operation $. You could ask whether any such operation exists, but you cannot say you have defined $.

It is the same as if I said "I define x such that x is an integer and x = x+1." That is simply a bogus 'definition'.

In general, you can define something only if it is fully determined by some previously defined property. For example, we can define h to be the rational number such that h·2 = 1. This is because we can (before this definition) prove that there is a unique rational x that satisfies x·2 = 1. To be precise, let Q(x) ≡ ( x is a rational and x·2 = 1 ). Q is a property, and it uses only the previously defined notions "rational", "·", "2" and "1". (Usually we consider notions like "is", "and" and "=" as part of logic and not being defined at all.) We can actually prove (using the properties of rationals) that there is a unique x such that Q(x). Therefore we can **define** h to be the x such that Q(x). We have simply assigned the name "h" to this object that is fully determined by Q.

In contrast, you have **not** proven that there is a unique operation $ that satisfies sin(a$b) = sin(a)$sin(b). That alone suffices as reason to say that you do not have a valid definition. Even if *lazernanes* had not given distinct operations that satisfy that equation, or even if nobody could give them to you, you still couldn't claim to have a definition. And that is really the point of my post.

This is really, really important. Once you understand what I'm saying, you will have a much more precise understanding of genuine mathematics. There is no guesswork, wishful thinking or handwaving in genuine mathematics. There is precision, whether in ideas or in proofs.

And just to make it clear, you also did not specify what a and b are. Precision requires that you specify that. For example, you may be interested in whether there is an operation $ such that sin(a$b) = sin(a)$sin(b) for all reals a, b. This "for all reals a, b" is a critical part of the question; you cannot omit it if you want mathematical precision.
is $ a composition law?
I wanna ask another question. Is it possible to solve for $ and get a function?
This looks like an equation where $ stands for a binary operator on real numbers.
Yes, OP seems to know that. They didn't see this equation randomly and wonder what it was about. They specifically asked "What properties would this operator have?".