Whether a discussion proceeds effectively depends on how completely the question-and-answer segments (Q&A segments) generated during the discussion are resolved, and the relevance of answers can serve as a clue for evaluating that degree of completion. In this study, we argue that discussion participants’ heart rate (HR) and heart rate variability (HRV), which have recently received increased attention as crucial indicators for evaluating cognitive task performance, can be used to predict the relevance of participants’ answers in the Q&A segments of discussions. To validate this argument, we propose an intelligent system that acquires and visualizes HR data in real time using a non-invasive device, e.g. an Apple Watch, to measure and record participants’ HR. We also developed a web-based human-scoring method for evaluating the answer-relevance of Q&A segments and the difficulty level of questions. A total of 17 real lab-seminar-style discussion experiments were conducted, during which the Q&A segments and the participants’ HR were recorded with the proposed system. We then trained three machine-learning classifiers, i.e. logistic regression, support vector machine (SVM), and random forest, to predict the answer-relevance of Q&A segments from the extracted HR and HRV features. Classifier accuracy was evaluated by the area under the ROC curve (AUC) with leave-one-student-out cross-validation, yielding AUC = 0.76 for the logistic regression classifier, AUC = 0.77 for the SVM classifier, and AUC = 0.79 for the random forest classifier. These results demonstrate the feasibility of predicting the relevance of participants’ answer statements in Q&A segments from their HR data, and they indicate the potential of the presented tools for scaling up this type of analysis to large numbers of subjects and for evaluating and improving discussion outcomes in higher-education environments.
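The evaluation protocol described above can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline: the HR/HRV feature names, the synthetic data, and all hyperparameters are assumptions made for the example, with only the three classifier types, the AUC metric, and the leave-one-student-out scheme taken from the abstract.

```python
# Sketch of leave-one-student-out evaluation of three classifiers
# predicting answer-relevance from HR/HRV features (illustrative data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in: 4 hypothetical HR/HRV features per Q&A segment
# (e.g. mean HR, SDNN, RMSSD, pNN50), a binary answer-relevance label,
# and a student ID used as the cross-validation group.
n_segments, n_students = 120, 10
X = rng.normal(size=(n_segments, 4))
y = (X[:, 0] + 0.5 * X[:, 1]
     + rng.normal(scale=0.8, size=n_segments) > 0).astype(int)
groups = rng.integers(0, n_students, size=n_segments)

classifiers = {
    "logistic regression": make_pipeline(StandardScaler(), LogisticRegression()),
    "SVM": make_pipeline(StandardScaler(), SVC(probability=True)),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
}

# Leave-one-student-out: each fold holds out every segment of one student.
logo = LeaveOneGroupOut()
aucs = {}
for name, clf in classifiers.items():
    y_true, y_score = [], []
    for train_idx, test_idx in logo.split(X, y, groups):
        clf.fit(X[train_idx], y[train_idx])
        y_score.extend(clf.predict_proba(X[test_idx])[:, 1])
        y_true.extend(y[test_idx])
    # Pool held-out predictions across folds, then compute a single AUC.
    aucs[name] = roc_auc_score(y_true, y_score)
    print(f"{name}: AUC = {aucs[name]:.2f}")
```

Grouping folds by student (rather than shuffling segments randomly) prevents HR patterns of the same participant from appearing in both the training and test sets, which is the point of the leave-one-student-out design.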