# Thread: What's that code snippet?

1. Originally Posted by Jayjay
Originally Posted by Swammerdami

A simple one-liner suffices. No loop is needed.
Finally got it.
Code:
```int lastbit(int x)
{
    return ((x - 1) ^ x) & x;
}```
x-1 clears the lowest set bit to 0 and sets all the bits below it to 1. XOR with the original x leaves only those changed bits set. AND with the original x then leaves only the desired bit active. It seems to work also for 0 and negative numbers, which is neat.
Beautiful! Variations are possible, e.g. return x & (~(x - 1)); The code works whether the machine uses 2's-complement arithmetic or 1's-complement.

A programmer showed me this trick 52 years ago and I thought it was wonderful. After all these decades I'm surprised it's not better known, but I guess a large majority of programmers aren't into "bit twiddling" at all.

Big kudos to Jayjay for discovering the trick himself. We'll never know if I could have figured it out myself: the programmer 52 years ago only gave me a few seconds to think of an answer before revealing it himself.

(BTW, the trick is not useless! Sometimes you want to loop one-by-one over the 1-bits in a word: you may as well do it quickly.)

2. float p[n], r[n];
int z = 1;

while (z == 1) {
    z = 0;
    for (i = 0; i < n - 1; i++) {
        if (p[i] > p[i + 1]) {
            a = p[i];
            p[i] = p[i + 1];
            p[i + 1] = a;
            z = 1;
        }
    }
}

for (i = 1; i <= n; i++) r[i - 1] = (float)i / n;

plot(p, r);  /* x = sorted values, y = cumulative fraction */

This is a useful bit of code.

3. Depict visually whether some data follows Zipf's Law?

4. Originally Posted by Swammerdami

Depict visually whether some data follows Zipf's Law?
Not directly; it is actually routine analysis. I was not familiar with Zipf's Law, but it may be related in some way.

5. Originally Posted by steve_bank
float p[n], r[n];
int z = 1;

while (z == 1) {
    z = 0;
    for (i = 0; i < n - 1; i++) {
        if (p[i] > p[i + 1]) {
            a = p[i];
            p[i] = p[i + 1];
            p[i + 1] = a;
            z = 1;
        }
    }
}

for (i = 1; i <= n; i++) r[i - 1] = (float)i / n;

plot(p, r);  /* x = sorted values, y = cumulative fraction */

This is a useful bit of code.
explanation https://talkfreethought.org/showthre...Practical-Math

6. unsigned int a, b, c;
a = 820;
b = 200;
b = ~b + 1;
c = a + b;

7. For steve_bank's recent one, I'll assume two's-complement integers. They are universal in chip CPUs, even if they were not in pre-chip ones.

~b = flip all the bits = -b - 1

Thus, ~b + 1 = -b

a = 820;
b = 200;
b = ~b + 1;
(b = - 200)
c = a + b;
(c = 620)

8. c = a - b

Subtraction in the processor is done in 2s complement: an addition instead of a direct binary subtraction. 2s comp is done in hardware and automatically keeps track of the sign.

Negative numbers are stored as 2s complement. The library code converts the negative 2s comp value to decimal text: -1 is displayed, but the stored value is ffffffff.

in C

int a, b;

a = 1;
b = -1;
printf("dec a %d hex %x\n", a, a);
printf("dec b %d hex %x\n", b, b);

dec a 1 hex 1
dec b -1 hex ffffffff -- -1 in 2s complement

Decrementing from 0 into negative numbers:

ffffffff -1
fffffffe -2

It is harder to interpret 32- and 64-bit floating-point numbers by inspection.
