Hypothesis space in AdaBoost or machine learning in general


I was curious about the following: in most learning algorithms, when an algorithm is said to learn a concept class $C$, the algorithm outputs a function from a hypothesis space $H$, and the complexity of the learning algorithm is often stated in terms of the VC dimension of $H$. What assumptions can one make on $H$? For example, does it include all possible functions, or only $C$ itself? Is there a reference that discusses which properties of the functions in $H$ I can assume? Concretely: for every $c \in C$ and every $\varepsilon > 0$, does there always exist an $h \in H$ for which $\Pr_{x}[h(x) = c(x)] \geq 1 - \varepsilon$? Is this fair to assume?
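
To make the question concrete, here is a minimal sketch of AdaBoost with one-dimensional threshold stumps as the base class (the names `train_adaboost`/`predict` and the stump parameterization are my own, purely for illustration). The point is that the final hypothesis is a weighted majority vote over stumps, so the space $H$ the algorithm actually outputs from is the class of thresholded linear combinations of base hypotheses, which is generally richer than the base class itself:

```python
import numpy as np

def train_adaboost(X, y, n_rounds=20):
    """AdaBoost with threshold stumps as the base class.

    X: (n,) array of real features; y: (n,) array of labels in {-1, +1}.
    Returns a list of (alpha, threshold, sign) triples; the output
    hypothesis is sign(sum_t alpha_t * s_t * sign(x - theta_t)).
    """
    n = len(X)
    w = np.full(n, 1.0 / n)              # distribution over examples
    ensemble = []
    thresholds = np.unique(X)
    for _ in range(n_rounds):
        # Pick the stump (threshold, sign) with minimum weighted error.
        best = None
        for thr in thresholds:
            for s in (+1, -1):
                pred = s * np.sign(X - thr)
                pred[pred == 0] = s      # break ties on the boundary
                err = w[pred != y].sum()
                if best is None or err < best[0]:
                    best = (err, thr, s, pred)
        err, thr, s, pred = best
        if err >= 0.5:                   # no weak learner beats random guessing
            break
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
        ensemble.append((alpha, thr, s))
        w *= np.exp(-alpha * y * pred)   # up-weight the mistakes
        w /= w.sum()
    return ensemble

def predict(ensemble, X):
    # Final hypothesis: a weighted majority vote over the chosen stumps.
    total = sum(a * s * np.sign(X - thr) for a, thr, s in ensemble)
    return np.sign(total)

# Example: learn a threshold concept c(x) = sign(x - 0.3).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=200)
y = np.sign(X - 0.3)
y[y == 0] = 1
H = train_adaboost(X, y)
print(np.mean(predict(H, X) == y))       # training accuracy
```

So even in this simple case it is unclear to me whether $H$ should be taken to be the base class, the class of weighted votes over it, or something larger still, which is what motivates the question above.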