A ZFS pool is a collection of disk devices grouped together so that filesystems can be created from it. In that regard it is similar to Linux LVM (Logical Volume Manager). Disks are grouped into logical devices called vdevs (virtual devices). The vdev layout determines the redundancy level of the pool. A pool can consist of a single vdev or of multiple vdevs. vdevs of different layouts can be mixed in one pool, but that goes against best practices and should be avoided. Below are examples of pool layouts along with an explanation of their redundancy levels.
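As a sketch of the pool/vdev relationship, the following creates a pool built from two mirror vdevs (a rough RAID10 equivalent). The pool name and device paths are hypothetical, and the commands require the ZFS utilities and root privileges:

```shell
# Create a pool named "tank" from two mirror vdevs.
# Each mirror survives the loss of one of its two disks.
zpool create tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd

# Inspect the resulting vdev layout.
zpool status tank
```

`zpool status` shows the pool as a tree: the pool at the top, each vdev below it, and the member disks under each vdev.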
For the various RAIDZ configurations a few things are worth noting. First and most important: because ZFS handles both the filesystem and volume management, it offers a solution to the so-called RAID write hole. A write hole can occur when the write of a data block is interrupted by a system crash (most notably a power failure), which can leave the parity data insufficient for recovery. Battery-backed RAID controllers can mitigate this problem; ZFS takes another approach. It writes variable-length data stripes and makes the operation atomic. Thanks to its copy-on-write and transactional nature, the filesystem is always in a consistent state: a block is either committed or rolled back.
A pool always consists of vdevs. A single-disk pool contains a single vdev that contains a single disk.
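A minimal single-disk pool can be sketched with a file-backed vdev, which is a common way to experiment without dedicating real disks (names and sizes below are illustrative; ZFS utilities and root required):

```shell
# Create a 1 GiB sparse file to stand in for a disk.
truncate -s 1G /tmp/disk0.img

# A pool with a single vdev containing a single "disk" -- no redundancy:
# losing that one device loses the pool.
zpool create testpool /tmp/disk0.img
```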
A raidz2 is a rough equivalent of RAID6 in traditional RAID. It uses two disks' worth of parity, so the capacity of a single raidz2 vdev is the sum of all its disks' capacities minus two disks.
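A raidz2 vdev is declared by listing its member disks after the `raidz2` keyword. Device paths here are hypothetical:

```shell
# Six-disk raidz2 vdev: usable capacity is roughly (6 - 2) x one disk,
# and the vdev survives the loss of any two member disks.
zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
```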
In addition to the layouts above, other types of media can be used in so-called hybrid pools. Besides platter disks, a storage administrator can add SSD- and DRAM-based devices to a pool as cache and ZFS Intent Log devices.
Adding an SSD as a cache device creates a layer 2 cache (hence the name L2ARC). It is a layer that sits between the in-RAM cache (the most costly but fastest medium) and platter-disk storage (the least costly but slowest medium in the pool). In an ideal world the hottest data live in RAM, warm data live on SSD, and cold data sit on disks.
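A cache device is attached to an existing pool with `zpool add`; the pool name and NVMe device path below are assumptions:

```shell
# Attach an SSD as an L2ARC cache device for pool "tank".
# Cache devices hold no pool data permanently, so losing one is harmless.
zpool add tank cache /dev/nvme0n1
```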
Adding an SSD (or, better, a battery-backed DRAM device) as a ZFS Intent Log allows data blocks to be grouped and committed to the main disks in large batches, flushed in one large I/O. That not only takes load off disk writes but also helps prevent pool fragmentation.
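A dedicated log device is also attached with `zpool add`; device paths are hypothetical. Since the log holds recently written data that has not yet reached the main disks, mirroring it is a common precaution:

```shell
# Attach a single fast device as the ZFS Intent Log for pool "tank".
zpool add tank log /dev/nvme1n1

# Alternatively, use a mirrored pair so a log-device failure cannot
# lose in-flight synchronous writes.
zpool add tank log mirror /dev/nvme1n1 /dev/nvme2n1
```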