When you do everything with functions in the lambda calculus, one of the trickier parts is dealing with bound variables. Combinators sidestep the issue entirely by not having any. They still do everything with functions, but there are no lambda terms with which to define functions by rules. Instead, every function is composed from other functions.

What you end up with is a stupendously simple model of computation whose syntax is a strict subset of the lambda calculus. Here's one such model:

A combinator is either:

1) The symbol K
2) The symbol S
3) An application, written simply as "f x", where f and x are combinators.

Application is left associative, so you can write "f x y" to mean "(f x) y".
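This grammar is small enough to model directly. Here's a minimal sketch in Python; the nested-pair representation and the `show` printer are my own choices for illustration, not anything fixed by the definition above.

```python
# A sketch of the combinator grammar: "K" and "S" are the two symbols,
# and an application "f x" is the pair (f, x). This encoding is an
# assumption for illustration, not part of the definition.

def app(f, x):
    """Application 'f x'."""
    return (f, x)

def show(t):
    """Render a term, relying on left associativity to drop parentheses
    around the function position."""
    if isinstance(t, str):
        return t
    f, x = t
    # Only the argument needs parentheses when it is itself an application.
    arg = show(x) if isinstance(x, str) else "(" + show(x) + ")"
    return show(f) + " " + arg

# "f x y" means "(f x) y":
print(show(app(app("S", "K"), "K")))   # S K K
print(show(("S", ("K", "K"))))         # S (K K)
```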

There are then two equations:

1) K x y = x
2) S f g x = (f x) (g x)

Applying these equations recursively as rewrite rules gives a Turing-complete model of computation. Any lambda term can be translated into a combinator, and if the lambda term normalizes under the rules of the lambda calculus, the combinator likewise normalizes under rules 1 and 2.
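Such a rewrite system can be sketched in a few lines. In this sketch, terms are nested pairs with "K" and "S" at the leaves (an assumed encoding), and rewriting happens at the head of the term first, which corresponds to normal-order reduction and finds a normal form whenever one exists.

```python
# The two rewrite rules over terms encoded as nested pairs.

def step(t):
    """Try one rewrite at the head; return (term, changed?)."""
    # Unwind the spine: t = h a1 a2 ... an, applications left-associated.
    args, h = [], t
    while isinstance(h, tuple):
        args.append(h[1])
        h = h[0]
    args.reverse()
    if h == "K" and len(args) >= 2:        # rule 1: K x y = x
        new, rest = args[0], args[2:]
    elif h == "S" and len(args) >= 3:      # rule 2: S f g x = (f x) (g x)
        f, g, x = args[0], args[1], args[2]
        new, rest = ((f, x), (g, x)), args[3:]
    else:
        return t, False
    for a in rest:                          # reapply any leftover arguments
        new = (new, a)
    return new, True

def normalize(t, limit=100_000):
    """Rewrite at the head until stuck, then recurse into subterms.
    Not every term has a normal form, hence the step limit."""
    for _ in range(limit):
        t, changed = step(t)
        if not changed:
            if isinstance(t, tuple):
                return (normalize(t[0], limit), normalize(t[1], limit))
            return t
    raise RuntimeError("no normal form within the step limit")

# Rule 1: K S K = S.
print(normalize((("K", "S"), "K")))          # S
# S K K behaves as the identity: S K K S = (K S) (K S) = S.
print(normalize(((("S", "K"), "K"), "S")))   # S
```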

For some examples: when you turn the lambda terms for true and false into combinators (abbreviating the identity S K K as I), you get:

I = S K K
True = S (K K) I
False = K I
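You can check mechanically that these behave as boolean selectors: True x y rewrites to x, and False x y to y. Here's a sketch using the same assumed encoding of terms as nested pairs with head-first rewriting.

```python
# Checking that the combinator booleans select their arguments.
# Terms are nested pairs with "K"/"S" leaves (an assumed encoding).

def step(t):
    """One head rewrite: K x y = x, or S f g x = (f x) (g x)."""
    args, h = [], t
    while isinstance(h, tuple):
        args.append(h[1])
        h = h[0]
    args.reverse()
    if h == "K" and len(args) >= 2:
        new, rest = args[0], args[2:]
    elif h == "S" and len(args) >= 3:
        new, rest = ((args[0], args[2]), (args[1], args[2])), args[3:]
    else:
        return t, False
    for a in rest:
        new = (new, a)
    return new, True

def normalize(t, limit=100_000):
    """Head-first reduction to normal form, with a safety limit."""
    for _ in range(limit):
        t, changed = step(t)
        if not changed:
            if isinstance(t, tuple):
                return (normalize(t[0], limit), normalize(t[1], limit))
            return t
    raise RuntimeError("no normal form within the step limit")

I_     = (("S", "K"), "K")                 # I = S K K
TRUE   = (("S", ("K", "K")), I_)           # True = S (K K) I
FALSE  = ("K", I_)                         # False = K I

# Booleans pick out their first or second argument:
print(normalize(((TRUE, "K"), "S")))       # K
print(normalize(((FALSE, "K"), "S")))      # S
```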

And a naive translation of negation gives this monstrosity:

S (S (S (K (S (S (K S) (S (S (K S) (S (K K) (K S))) (S (S (K S) (S (S (K S) (S (K K) (K S))) (S (S (K S) (S (K K) (K K))) True))) (S (S (K S) (S (K K) (K K))) False)))) (S (K K) False))) I) (K False)) (K True)

Reassuringly, if you apply this to True and then run the rewrite rules, you arrive back at K I = False. And if you apply it to False, you arrive back at S (K K) I = True.
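This, too, is easy to check mechanically. The sketch below parses the "S (K K) I" notation into nested pairs (the parser and encoding are my own assumptions) and rewrites head-first, confirming both reductions.

```python
import re

def parse(src, defs=None):
    """Parse left-associative application with parentheses for grouping.
    Names like True/False/I are looked up in defs."""
    defs = defs or {}
    toks = re.findall(r"[()]|\w+", src)
    pos = 0
    def atom():
        nonlocal pos
        t = toks[pos]; pos += 1
        if t == "(":
            e = expr()
            pos += 1          # consume ")"
            return e
        return defs.get(t, t)
    def expr():
        e = atom()
        while pos < len(toks) and toks[pos] != ")":
            e = (e, atom())
        return e
    return expr()

def step(t):
    """One head rewrite: K x y = x, or S f g x = (f x) (g x)."""
    args, h = [], t
    while isinstance(h, tuple):
        args.append(h[1])
        h = h[0]
    args.reverse()
    if h == "K" and len(args) >= 2:
        new, rest = args[0], args[2:]
    elif h == "S" and len(args) >= 3:
        new, rest = ((args[0], args[2]), (args[1], args[2])), args[3:]
    else:
        return t, False
    for a in rest:
        new = (new, a)
    return new, True

def normalize(t, limit=100_000):
    """Head-first reduction to normal form, with a safety limit."""
    for _ in range(limit):
        t, changed = step(t)
        if not changed:
            if isinstance(t, tuple):
                return (normalize(t[0], limit), normalize(t[1], limit))
            return t
    raise RuntimeError("no normal form within the step limit")

I = parse("S K K")
TRUE = parse("S (K K) I", {"I": I})
FALSE = parse("K I", {"I": I})
NOT = parse(
    "S (S (S (K (S (S (K S) (S (S (K S) (S (K K) (K S))) "
    "(S (S (K S) (S (S (K S) (S (K K) (K S))) "
    "(S (S (K S) (S (K K) (K K))) True))) "
    "(S (S (K S) (S (K K) (K K))) False)))) (S (K K) False))) I) "
    "(K False)) (K True)",
    {"True": TRUE, "False": FALSE, "I": I})

assert normalize((NOT, TRUE)) == FALSE    # Not True = K I = False
assert normalize((NOT, FALSE)) == TRUE    # Not False = S (K K) I = True
print("ok")
```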

It's sometimes said that combinators are the machine code of the lambda calculus. This is partly a slight: combinators are nearly unreadable and unwritable by hand, and are better produced by machine translation from something nicer, like the lambda calculus. And in fact, functional language compilers often use combinators as an intermediate representation.