My advice would be to focus on understanding the ideas very deeply. A lot of people focus too heavily on the mechanical details of an algorithm without sufficiently understanding the thought process behind it. Challenge yourself to think deeply and really understand why an algorithm is designed the way it is, whether it can be generalized to solve related problems, and what its limitations, edge cases, and degenerate cases are. In my experience, people usually learn two things when it comes to a particular technique:
1. The mechanical details. For example, how to compute an integral, or the steps needed to insert a value into a max-heap.
2. The way to apply the technique to real-world situations. For example, how to use integrals as a useful tool for performing calculations, or how to use heaps to solve problems where having a priority queue is useful. (A short sketch of both follows this list.)
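To make both of those concrete, here is a minimal sketch of inserting into a max-heap in Python, using the usual array-backed layout where the children of index i sit at 2*i + 1 and 2*i + 2. The function name `heap_push` and the surrounding usage are purely illustrative, not taken from any particular library.

```python
def heap_push(heap, value):
    """Append the value, then sift it up until its parent is >= it."""
    heap.append(value)
    i = len(heap) - 1
    while i > 0:
        parent = (i - 1) // 2
        if heap[parent] >= heap[i]:
            break                                  # max-heap property restored
        heap[i], heap[parent] = heap[parent], heap[i]
        i = parent

# The second kind of knowledge is knowing when to reach for this: the largest
# value is always at heap[0], so the structure doubles as a simple priority queue.
heap = []
for v in [3, 9, 1, 7]:
    heap_push(heap, v)
print(heap[0])   # 9
```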
Once people have learned those two things, they often think they know everything there is to know about the concept. "I know what a heap is, how to code one, and how to apply one to solve problems. What else could there be to know?"
Try to go beyond that and look for a third level of understanding. Ask, "How did someone think of this idea?" Find a plausible sequence of thoughts that leads you to rediscover the idea yourself. It doesn't have to be historically accurate; that is, it doesn't have to be the same thought process that led the algorithm's inventor to the idea originally. You have the advantage of hindsight to guide you. Find a sequence of thoughts that makes sense for you, and make the discovery yours.
For example, let's say you're looking to understand self-balancing search trees. Think about the problem at hand: to build a dynamic collection that supports fast insert, remove, and search operations. Suppose you've already invented the concept of a linked list. You might get the idea that, to enable faster operations, you can maintain more than just the head and tail pointers into the linked list -- you could maintain K pointers that are spread evenly throughout the linked list. If you tune K to be sqrt(n), you can achieve O(sqrt(n)) insert, delete, and find. This is still a pretty flat structure, though, and you get the idea to apply it hierarchically: index the index itself, and keep going until each node only points to a handful of children below it. You find that when doing inserts or deletes, it's difficult to maintain the exact same number of children for each parent and still get a good runtime, so you allow that number to vary between, say, 2 and 4. Congratulations, you've invented something similar to 2-3-4 trees.
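Here is a minimal sketch of that intermediate, flat structure, assuming Python and an illustrative class name `BlockList`. Instead of literal pointers into a linked list, it keeps the elements in sorted blocks of roughly sqrt(n) entries each, which gives the same O(sqrt(n)) flavor for search and insert; it is a toy under those assumptions, not a production structure.

```python
import bisect
import math

class BlockList:
    """A sorted collection kept as about sqrt(n) blocks of about sqrt(n)
    elements each: locating the right block and working inside it each
    cost roughly O(sqrt(n))."""

    def __init__(self):
        self.blocks = []   # each block is a sorted Python list
        self.size = 0

    def _block_index(self, value):
        # Walk the ~sqrt(n) blocks until one could contain the value.
        for i, block in enumerate(self.blocks):
            if value <= block[-1]:
                return i
        return len(self.blocks) - 1

    def contains(self, value):
        if self.size == 0:
            return False
        block = self.blocks[self._block_index(value)]
        j = bisect.bisect_left(block, value)
        return j < len(block) and block[j] == value

    def insert(self, value):
        self.size += 1
        if len(self.blocks) == 0:
            self.blocks.append([value])
            return
        i = self._block_index(value)
        bisect.insort(self.blocks[i], value)
        # Split oversized blocks so each stays near sqrt(n) elements long.
        limit = 2 * int(math.sqrt(self.size)) + 2
        if len(self.blocks[i]) > limit:
            block = self.blocks[i]
            half = len(block) // 2
            self.blocks[i:i + 1] = [block[:half], block[half:]]

bl = BlockList()
for v in [5, 1, 9, 3, 7]:
    bl.insert(v)
print(bl.contains(3), bl.contains(4))   # True False
```

The hierarchical step in the text is roughly what you get by indexing the blocks themselves in the same way, and constraining each node's fan-out to between 2 and 4 children is what turns that hierarchy into something like a 2-3-4 tree.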
When you "adopt" an idea like that, taking ownership of it into your own mind, you gain the power to modify it at will to suit your needs. The path that led you to an idea can often be continued, or backtracked upon and forked in another direction, to find a related idea for solving a related problem. You see all at once the current idea's strengths and weaknesses, where it can be generalized, where it cannot succeed, why it is done the way it's done and not some other way, what alternative ways it could have been done -- and all that can inform the search.
Perhaps more importantly, as you practice the thought process of invention, you train your mind to be better at inventing.