Without some form of compression, I don't think it is.
In particular, consider your example (5 horcruxes with any 3 needed to reconstruct). View the original file as the interval (0, N) and treat reconstruction as a set-covering problem. If each horcrux covers an interval of length N/3, then three horcruxes can cover (0, N) only if their intervals are pairwise disjoint, so once any two horcruxes overlap there is no third that can complete the covering. But 5 intervals of length N/3 have total length 5N/3 > N, so by pigeonhole some pair must overlap, which is a contradiction.
If you want to split a file into 5 horcruxes _and_ you require that all 5 be present to reconstitute the original file, then each horcrux can be one fifth the size of the original. However, if you allow any 3 of the 5 to reconstitute the file while any 2 reveal nothing about it, then each horcrux must be as big as the file itself :)
It would be possible to just make the encrypted file publicly available and distribute only the key shares to each "horcrux". Each share then only needs to be about the size of the key used for encryption.
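A minimal sketch of that key-splitting step, using Shamir's secret sharing over a prime field. The prime, share count, and threshold here are illustrative choices, not taken from any particular tool:

```python
# Shamir secret sharing sketch: split a key into n shares, any k reconstruct.
import random

P = 2**127 - 1  # a Mersenne prime, large enough to hold a 128-bit key

def split(secret, n=5, k=3):
    """Return n points on a random degree-(k-1) polynomial with f(0) = secret."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # den^(P-2) is the modular inverse of den (Fermat's little theorem)
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

key = random.randrange(P)
shares = split(key, n=5, k=3)
assert reconstruct(shares[:3]) == key   # any 3 shares recover the key
assert reconstruct(shares[2:]) == key
```

Each share is a single field element, so roughly the size of the key, regardless of how big the encrypted file is.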
RAID 6 uses some linear algebra to let any m of n blocks reconstruct the original data, where typically m = n - 2, but the math works for any m and n you like. So if you split up the data using the same kind of algorithm, and pre- or post-encrypt the blocks, you get the same thing being attempted here while only inflating storage by a factor of n/m.
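A toy version of that linear-algebra trick, assuming a small prime field for readability rather than the GF(2^8) arithmetic real RAID 6 / Reed-Solomon implementations use. Encoding evaluates a polynomial whose coefficients are the m data symbols at n distinct points; decoding solves the resulting Vandermonde system from any m surviving points:

```python
# "Any m of n" erasure-coding sketch over a prime field.
P = 257  # prime just above a byte, so each data symbol fits in 0..255

def encode(data, n):
    """data: m symbols. Return n coded symbols by evaluating at x = 1..n."""
    return [sum(d * pow(x, j, P) for j, d in enumerate(data)) % P
            for x in range(1, n + 1)]

def decode(points, m):
    """points: at least m (x, y) pairs. Gauss-Jordan solve the Vandermonde system."""
    A = [[pow(x, j, P) for j in range(m)] + [y] for x, y in points[:m]]
    for col in range(m):
        piv = next(r for r in range(col, m) if A[r][col])   # find a pivot row
        A[col], A[piv] = A[piv], A[col]
        inv = pow(A[col][col], P - 2, P)                    # modular inverse
        A[col] = [a * inv % P for a in A[col]]
        for r in range(m):
            if r != col and A[r][col]:
                f = A[r][col]
                A[r] = [(a - f * b) % P for a, b in zip(A[r], A[col])]
    return [A[r][m] for r in range(m)]  # the recovered data symbols

data = [10, 20, 30]       # m = 3 data symbols
coded = encode(data, 5)   # n = 5 stored symbols: storage inflated by 5/3
# lose any two symbols, reconstruct from the remaining three:
survivors = [(1, coded[0]), (4, coded[3]), (5, coded[4])]
assert decode(survivors, 3) == data
```

Each stored block is 1/m of the data (plus a tiny index), which is exactly the n/m storage inflation mentioned above, and why this beats the "each horcrux as big as the file" bound when you drop the secrecy requirement.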