There are reasons to change, and there's difficulty of change.
The worst-case scenario in a codebase is when these two have anything close to a proportional relationship, when the hardest things to change have the most reasons to change.
That covers a lot of software engineering principles in a nutshell (albeit coarsely and in oversimplified form).
Difficulty of Change: Afferent Coupling
Code that's reused in a lot of places (ex: the public interface of a widely-used class, or the signature and return type of a widely-used function) is always going to be difficult to change, since changing its design breaks everything that uses that design (a rippling/cascading need for changes).
For example, at a basic level, a package (design) that lots of other packages depend on is going to be difficult to change. A package (design) that few or no other packages depend on generally wouldn't be anywhere near as difficult to change.

It sounds to me like a lot of your code fits more into the second category, and that kind of code tends to be easier to change without breaking external dependents.
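To make that concrete, here's a tiny Python sketch (the function and its callers are made up for illustration) of why widely-depended-on code is costly to change:

```python
# A hypothetical widely-used function: lots of code depends on its exact
# signature and return type.
def parse_settings(text: str) -> dict:
    """Parse 'key = value' lines into a plain dict."""
    settings = {}
    for line in text.splitlines():
        if "=" in line:
            key, value = line.split("=", 1)
            settings[key.strip()] = value.strip()
    return settings

# Imagine dozens of call sites like these scattered across the codebase,
# all written against the dict-returning design:
audio = parse_settings("volume = 0.8\nmuted = false")
video = parse_settings("fullscreen = true")
print(audio["muted"], video["fullscreen"])

# Changing the design later (say, returning a Settings object instead of a
# dict, or adding a required schema parameter) breaks every one of those
# call sites: the rippling/cascading need for changes described above.
```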
Reasons to Change: Responsibilities
While the worst-case scenario we want to avoid is for a design that's very difficult to change to also have many reasons to change, just about anything benefits from having fewer reasons to change if possible.
One of the things that increases the number of reasons for something to change is when that something is doing more things.
As a blatant example, if you have a single function (or even class) that is responsible for playing music, calculating physics, and drawing stuff to a screen, then that has a lot of reasons to change. Someone might want to improve the way it's drawing stuff to a screen. Another might want to correct a minor glitch in the physics calculation. Another might want the music side of the functionality to handle a new audio format. Some of these might want to change not just the implementation of the function, but the design of the function (and thus break all the code using it).
As a result, it's worth splitting these concerns up into separate functions. Each function, now tackling a single responsibility, will have fewer reasons to change (and will also generally be easier to change, since only the parts of your code that need audio will use the function that plays music, for example, along with other reasons mentioned below).
At some point, perhaps your application still ultimately wants to play music, calculate physics, and draw stuff to the screen. But it doesn't have to do all of this in one function, let alone one class, let alone one module. You might be able to push these disparate needs all the way towards the main entry point of your application, which just constructs a GUI that calls various functions to play music, calculate physics, and draw stuff to the screen. That's a lot easier to manage than having some function (let alone class) at the core of your system trying to do all of this at once.
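Here's a rough Python sketch of that split (the function names and the toy world/track are made up, not from any particular engine):

```python
# One monolithic function with three unrelated responsibilities: it has at
# least three separate reasons to change (new audio formats, physics fixes,
# rendering improvements).
def update_everything(world, dt):
    ...  # play music, advance physics, draw to the screen, all in one place

# Split by responsibility, each function has roughly one reason to change,
# and only the code that needs audio ever touches the audio function.
def play_music(track):
    print(f"playing {track}")

def step_physics(world, dt):
    world["t"] = world.get("t", 0.0) + dt
    return world

def render(world):
    print(f"rendering world at t={world['t']:.2f}")

# The entry point is the only place that still knows about all three concerns:
def main():
    world = {}
    play_music("theme.ogg")
    step_physics(world, 1.0 / 60.0)
    render(world)

if __name__ == "__main__":
    main()
```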
Another way to look at this is that some part of your codebase will always find reasons to change, provided that your software is evolving, facing new demands and needs, and finding new versions to release. The best possible scenario is when the code that needs to change the most is the easiest to change. Often the code that is easiest to change will be simple code, not widely used, that's just using a bunch of stuff that doesn't need to change.
Difficulty of Change: Complexity
Typically the basic instability metric covering reasons to change/difficulty of change only tackles coupling between designs and abstracting away details, but I want to get into another broad reason that makes changes difficult even at the implementation level: complexity.
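(By "the basic instability metric", I mean the usual I = Ce / (Ca + Ce), where Ca counts a design's incoming dependencies, i.e. who depends on it, and Ce counts its outgoing ones, i.e. what it depends on. A design with many incoming dependencies and few outgoing ones is "stable", which is another way of saying it's costly to change, while an "unstable" design is cheap to change but tends to have more reasons to.)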
If you have a five-hundred-line function, it's generally going to be difficult to change, in addition to often having more reasons to change (because it's doing so much: something that does more has more reasons to change, as covered above).
It's also more difficult to change when it does have a reason to change, because it's complex. It's hard to digest 500 lines of code all at once while trying to figure out exactly how the function works in order to make meaningful and valid changes to it. That also makes it difficult to debug, and even to test thoroughly.
So you can also end up with that nightmarish scenario of something that is very difficult to change still having reasons to change if it's complex. The complexity will tend to invite changes, while also making them more difficult to apply.
Reasons to Change: Unreliability
Let's say you had to implement something inevitably complex just at the conceptual level. It involved digesting a 40-page cutting-edge paper on a groundbreaking algorithm riddled with complex mathematics and algorithms and data structures.
After weeks, you finally get what looks like a reasonable implementation. Phew! You give it some basic manual testing from within the application, and it seems to do what it's supposed to do. Yay!
Except your software goes into alpha and, after being exposed to a number of users, they find various edge cases your code didn't handle. Argh!!! Now you have to go back and revisit your original code as well as that complex paper that gave you nightmares in the first place, trying to figure out what's wrong and how to properly solve the issue.
Here, even if you organize your functions and classes neatly into designs that have few reasons to change (each tackling a singular, clear responsibility), the unreliability of your code (the edge cases you didn't spot in advance) can still provide a reason to change (a bug to fix). And the difficulty of change can still be tremendous, even if you only need to add a dozen lines of code to correct the issue.
The solution to this is to start testing your code more exhaustively in advance, including automated unit and integration tests that run each time you commit a change (continuous integration). Problems spotted sooner tend to be less costly to fix, and they're a lot better for your sanity.
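As a minimal sketch of what that can look like (using Python's built-in unittest, with a trivial made-up function standing in for the complicated algorithm):

```python
import unittest

# Stand-in for the complicated algorithm from that 40-page paper.
def clamp(value, low, high):
    """Clamp value to the inclusive range [low, high]."""
    if low > high:
        raise ValueError("low must not exceed high")
    return max(low, min(value, high))

class TestClamp(unittest.TestCase):
    def test_value_inside_range(self):
        self.assertEqual(clamp(5, 0, 10), 5)

    def test_edge_cases(self):
        # The kind of inputs alpha users tend to find for you.
        self.assertEqual(clamp(-3, 0, 10), 0)
        self.assertEqual(clamp(42, 0, 10), 10)
        self.assertEqual(clamp(0, 0, 0), 0)

    def test_invalid_range(self):
        with self.assertRaises(ValueError):
            clamp(1, 10, 0)

if __name__ == "__main__":
    unittest.main()
```

Wire a run like this into your continuous integration setup so it executes on every commit, and those alpha-user edge cases start getting caught before anyone outside your machine sees them.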
Reasons to Change: Details
Designs that mention more details tend to have more reasons to change. If the design of a robot offers a function that tells it to walk to new places, you might write a lot of code to tell it to walk to various rooms.
But what happens when you encounter a new robot design that lets it instantly teleport to a room with a molecular teleporter? Now we're screwed two ways. First, all that code telling the robot to walk to a room has to change to fit the new robot's design, which teleports to a room instead. Second, the new robot can't use the previous design, because it can't walk to a room.
The problem here is that the design is mentioning too many details. Walking implies that the robot has legs. All we really needed was an abstract design that can tell a robot to go to a room by whatever means possible. That abstract design would have been compatible with any means of transportation for any model of robot.
So again, a design that mentions lots of concrete details like this has more reasons to change (something very undesirable when we can avoid it). This then takes us into concepts like dependency injection and the dependency inversion principle, which might seem a little advanced at this stage (at least the latter). However, a key idea to take away from this is that you want your designs to mention fewer concrete details when they can avoid it. Go to a room, not walk to a room, not fly to a room. By doing that, you shape the code using such a design to be compatible with more underlying changes, and that code will find fewer reasons to change because there will be fewer reasons for the design itself to change.
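Here's a minimal Python sketch of that robot example (the class and function names are made up) using the more abstract "go to a room" design:

```python
from abc import ABC, abstractmethod

class Robot(ABC):
    """Abstract design: it only promises "get to a room somehow".
    It doesn't mention legs, wheels, wings, or teleporters."""

    @abstractmethod
    def go_to_room(self, room: str) -> None:
        ...

class WalkingRobot(Robot):
    def go_to_room(self, room: str) -> None:
        print(f"walking to {room}")

class TeleportingRobot(Robot):
    def go_to_room(self, room: str) -> None:
        print(f"teleporting to {room}")

# Code written against the abstract design doesn't care how the robot moves,
# so a new means of transportation gives it no reason to change.
def run_errands(robot: Robot) -> None:
    robot.go_to_room("kitchen")
    robot.go_to_room("lab")

run_errands(WalkingRobot())
run_errands(TeleportingRobot())
```

Passing the robot into run_errands rather than constructing it inside is the dependency injection part; having the high-level code depend on the abstract Robot rather than on a concrete walking robot is the dependency inversion part.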
Conclusion
So anyway, I hope this little simplified introduction helps a bit. If your designs are not being reused by many places, then that mitigates at least the difficulty-of-change part.
But if you take that as an excuse to design monolithic functions or classes, then you are still making maintenance more difficult than it needs to be by increasing both the difficulty of change (at another level) and the reasons to change.
10 smaller, simpler functions that each do one thing can be a lot easier to change, and have fewer reasons to change, than one huge combined function. In that case, a given reason to change will often apply to just one small, simple function, which also makes the change easier to apply to that one little function.
So software engineering principles don't really change much between reusable code that's used in a lot of places and code that isn't. It's always worth keeping in mind the dynamics that increase difficulty of change and reasons to change. We want to mitigate both.
The practical difference, perhaps, is that the code that is the absolute hardest to change (ex: code being reused all over the place that is also very complex at a basic conceptual level) is where you should work even harder to find fewer and fewer reasons to change (improving reliability through testing, having fewer responsibilities, abstracting away details, etc. -- the whole shebang).
Yet that doesn't mean you should drop your guard and start ignoring all these engineering principles even for the code that isn't being used by many places.