First, the process of replacing a disk that has failed gracefully, i.e. faulted in ZFS, is straightforward. It's a bit more involved if the disk has decided to die in such a way that it's no longer visible to the system, especially if it's still spun up. Most of that difficulty is just figuring out which physical drive is the dead one.
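For the graceful case it really is just a couple of zpool commands; here's a minimal sketch (the pool name "tank" and the wwn device names are placeholders, not from any real setup):

    zpool status tank
        # note which device shows FAULTED or UNAVAIL
    ls -l /dev/disk/by-id/
        # match the serials/WWNs the OS still sees against the labels on the
        # drives to work out which physical disk is the dead one
    zpool replace tank wwn-0xOLDDISK wwn-0xNEWDISK
        # kicks off the resilver onto the new disk
    zpool status tank
        # shows resilver progress; the old disk is detached automatically when it finishes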
Second, about having an extra drive connection for resilvers. I'm going to run down some scenarios real quick. These all assume SAS drives; with SATA it's easier to get an adapter (USB can work if you're REALLY desperate).
First of all, obviously, adding an extra drive is only temporary while the array rebuilds. Once that's done, you pull the bad drive out and put the new drive in its place. While the rebuild is happening, don't be afraid to hang a drive out the side of the case for a day or two if you need to; it will be fine.
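If you're wondering how the swap back works, a rough sketch (device names are placeholders again): hang the new disk off the spare connection, start the replace, and only move it into the bay once the resilver is done.

    zpool replace tank wwn-0xFAILED wwn-0xNEW
        # resilver runs with the new disk dangling off the spare port
    zpool status tank
        # wait for the scan line to show "resilvered ... with 0 errors"
    # then power down, pull the failed drive, and move the new one into its bay;
    # ZFS tracks member disks by their on-disk labels, not by which port they're
    # plugged into, so the pool comes back up fine after the move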
Also on that note, if you're short on PCIe slots (very likely if you're not using a whole server), consider temporarily pulling the 10G card if you put one in. 1G for NAS access stinks, but if you can live with it for a day or two, that frees up a slot. I don't think I'd buy a bigger HBA just because I have a failed drive, but given how cheap most HBAs are, I'd for sure consider one with more ports than you need up front if you can swing it.
If you have spare external ports, you can get adapters that go straight from the external port to a SAS drive connection, with a power lead hanging off the back. This is great if you, like me, have a totally full external shelf. You should always keep a spare external port, but I've not actually tried chaining a passive external adapter through the shelf. Thankfully I just use a single data connection for my shelf, so there's a free port on the card right there.
Again, you can rebuild just fine by removing the old drive and replacing it, but if you can swing an extra port, you should.