Assessing Teacher Digital Competence: the Construction of an Instrument for Measuring the Knowledge of Pre-Service Teachers
La evaluación de la competencia digital docente: construcción de un instrumento para medir los conocimientos de futuros docentes
Abstract
Assessing competences always poses a challenge and can be even more complicated when tackling a multidimensional competence like teacher digital competence (TDC). TDC is understood to consist of different dimensions linked to its components. This complexity gives rise to the need to organize and systematize both TDC training and its evaluation through a standard based on validated benchmark indicators. Designing and developing an instrument for TDC assessment has been a two-phase process. The COMDID-A self-assessment tool was developed in the first phase and COMDID-C, an instrument for assessing knowledge related to TDC, in the second. In this article we present the process of constructing the COMDID-C instrument. For this first stage, we worked with two samples, an expert validation sample and a pilot test sample. Due to the complexity of the test, we conducted a preliminary evaluation of its content validity, construct validity and reliability. Our results indicate that the test is well designed and consistent with its intended purpose. The next step will be administering the test to a larger sample that will allow the instrument to be externally validated.
TEACHER EVALUATION; TEACHER DIGITAL COMPETENCE; DIGITAL TECHNOLOGY; COMPETENCE ASSESSMENT; TEACHER TRAINING
INTRODUCTION
The importance of training teachers in the digital environment, and its close relationship with the quality of education in the 21st century, has been made clear in various reports published by international institutions. The European Commission (2007 and 2018) defines digital competence (DC) as one of the eight key competences citizens need in order to participate in today’s society. Specifically, the European Commission (2018, p. 5) specifies that “Digital competence involves the confident, critical and responsible use of, and engagement with, digital technologies for learning, at work, and for participation in society.” On this basis, we believe that today’s teachers should be able to train citizens to use digital technologies (DT) as a natural part of their daily lives.
In accordance with the objectives of the Education and Training 2020 Strategic Framework (European Union, 2009), teachers must link digital-age skills or competences to their professional practice. Various international and national frameworks of reference have been developed in this regard which define the digital competence of teachers, which we will refer to as teacher digital competence (TDC), and its components. Table 1 contains a summary of the main frameworks of reference that have been used most recently, which we will discuss below.
| Model / Framework | Institution | Reference | Areas or dimensions of TDC |
|---|---|---|---|
| ICT standards for FID | Ministry of Education, Chile | Enlaces (2008) | Pedagogical, technical, school management, social, ethical and legal aspects of development. |
| NETS-T | ISTE | ISTE (2008) | Learning and creativity of the students, learning and evaluation experiences, work, citizenship and professional growth. |
| Teachers ICT competence standards | UNESCO | UNESCO (2008) | Policy and vision, curriculum and evaluation, pedagogy, ICT, organization and administration, professional teacher training. |
| Teachers ICT competencies | Ministry of Education, Chile | Enlaces (2011) | Pedagogical, technical, management, social, ethical and legal, and professional development. |
| DigiLit Leicester | Leicester City Council | Fraser, Atkins & Richard (2013) | Search, evaluation and organization; create and share; evaluation and feedback; communication, collaboration and participation; security; identity; development. |
| ICT competences for professional teacher development | Ministry of National Education, Colombia | Ministerio Educación Nacional (2013) | Technological, communicative, pedagogical, management and research. |
| Common Framework for TDC | Ministry of Education, Government of Spain | INTEF (2017) | Information, communication, content creation, security, problem solving. |
| TDC Rubric | ARGET, Universitat Rovira i Virgili | Lázaro & Gisbert (2015) | Didactic, curricular and methodological; planning, organization and management of digital technological resources and spaces; relational, ethical and security; personal and professional. |
| TDC definition | Generalitat de Catalunya | Departament d’Ensenyament (2016) | Design, planning and didactic implementation; management of digital technological resources and spaces; communication and collaboration; ethics and digital citizenship; professional development. |
| DIGCOMP-EDU | European Commission | Redecker & Punie (2017) | Social and professional commitment; digital resources; digital pedagogy; evaluation and feedback; empowerment of students; facilitate students’ digital competence. |
From the perspective of general frameworks of reference, we would like to highlight some aspects of those mentioned above:
- At the international level, the European Commission (Redecker & Punie, 2017, p. 7) has published DigCompEdu, which specifies the digital competences that a teacher must possess in today’s society in order to effectively practice his or her profession. The proposal is designed to encourage reflection so that governments can develop their own frameworks of reference based on a common language and a shared point of departure.
- In Spain, there are two institutional frameworks of note that, in addition to contributing their own definitions of TDC, provide an evaluation rubric based on dimensions or areas, indicators and levels of skill development. The INTEF (2017) defines TDC as the set of competences that 21st century teachers must develop to improve the efficacy of their educational practice and for their own ongoing professional development.
- For the Government of Catalonia (2016, p. 2), TDC is “the ability of teachers to apply and transfer all their knowledge, strategies, skills and attitudes about learning and knowledge technologies (LKT) into real and concrete situations of their professional praxis in order to: a) facilitate student learning and the acquisition of digital competence; b) implement processes of improvement and innovation in teaching in accordance with the needs of the digital age; and c) contribute to their professional development in accordance with the processes of change that occur in society and in educational centers.”
In keeping with the frameworks described above, we understand TDC to be made up of a set of capacities, abilities and attitudes that teachers must develop in order to incorporate digital technologies into their professional practice and development.
FRAMEWORKS FOR TDC ASSESSMENT
Below we will describe the framework we used as a guide for the creation of the TDC test presented in this article. We will first emphasize the need to have an evaluation rubric that contains precise indicators and levels of development in order to be able to design questions that allow us to accurately measure the level of knowledge in the person being assessed (Carless, Joughin, & Mok, 2006). For this complex purpose, in keeping with Villa and Poblete (2011, p. 150-153), the assessment was constructed in the form of a criterion-referenced test (CRT) based on the following premises:
- The test must make it possible to completely assess the competence, including all of its dimensions.
- The level of development of the competence to be measured must be precisely defined.
- The test must be able to assess a balanced load of declarative, procedural and attitudinal knowledge put into action in professional situations.
- The questions must be aligned with the assessment indicators.
Lázaro and Gisbert (2015) created a rubric for the assessment of TDC in which the competence is structured into dimensions, descriptors and indicators for four levels of development. This proposal is aligned with the documents of the European Commission (Redecker & Punie, 2017, p. 7) and the Government of Catalonia (2016 and 2018), which were published later. As shown in Table 2, an analysis of the content of these frameworks reveals a clear, sometimes even exact, correspondence between the dimensions (called “areas” in the case of DigCompEdu) included in each of them.
| COMDID (Lázaro & Gisbert, 2015) | Generalitat de Catalunya | DigCompEdu |
|---|---|---|
| D1. Didactic, curricular and methodological aspects | D1. Design, planning and didactic implementation | A3. Digital pedagogy; A4. Evaluation and feedback; A5. Students’ empowerment; A6. Facilitate students’ digital competence |
| D2. Planning, organization and management of digital technological resources and spaces | D2. Organization and management of digital technological resources and spaces | A2. Digital resources |
| D3. Relational aspects, ethics and security | D3. Communication and collaboration; D4. Ethics and digital civism | A1. Professional commitment; A5. Students’ empowerment; A6. Facilitate students’ digital competence |
| D4. Personal and professional aspects | D5. Professional development | A1. Professional commitment |
A thorough analysis of the content of these frameworks in terms of their evaluation indicators shows that there is also a close relationship between their indicators:
- Correspondence between the teacher digital competence (COMDID) proposal and the Government of Catalonia proposal. Of the COMDID’s 22 descriptors, there are two that are not included in the Catalan government’s proposal. These refer to information handling and the creation of knowledge (from a didactic perspective) and to the use of a personal learning environment. In the Government of Catalonia’s proposal, there are two descriptors, both in the professional development dimension, that do not appear in the COMDID. These are related to reflective practice and participation in research. This comparison shows that the degree of coincidence between the two proposals is very high, as COMDID is a broad proposal that is virtually entirely included within the Catalan government’s work.
- Correspondence between COMDID and DigCompEdu. Of the COMDID’s 22 descriptors, five of them were not identified in the European Commission’s document. These refer to the methodological approach of the institution, to learning environments, to the management of spaces with digital technologies at the school, to the participation in projects that involve digital technologies and to personal learning environments. On the other hand, a descriptor referring to reflective praxis has been included in DigCompEdu that does not appear in the COMDID proposal. So, COMDID has proven to be a broader proposal than that of the European Commission.
Based on this content analysis, we believe that the evaluation rubric created by Lázaro and Gisbert (2015) is an adequate instrument for assessing TDC, in line with two governmental proposals: one international (DigCompEdu) and one national (Government of Catalonia). Some TDC assessments have used international frameworks for reference, such as DESECO or DigCompEdu, either exactly as they are laid out in the proposal (Sancho & Padilla, 2016) or in adapted versions (Gutiérrez & Serrano, 2016).
COMDID has been used as an evaluation rubric for TDC in other research and innovative experiments conducted by the ARGET research group (Lázaro, Esteve, Gisbert, & Sanromà, 2016; Silva, Miranda, Gisbert, Morales, & Onetto, 2016) and a version has recently been adapted for the Latin American context (Lázaro, Gisbert, & Silva, 2018). Therefore, we consider that this rubric, in addition to having been validated, has a history that allows us to use it as a point of departure and a reference for the creation of the test.
DESIGN OF THE ASSESSMENT PROCESS
We started from the concept of assessment as a component of the teaching-learning process that must have a formative function, not only for the teacher but also for the student. We see evaluation as a process that should guide students in their learning and develop their capacity to self-regulate that learning (Carless, 2007).
At the time of the assessment, it is important to remember that teachers carry out their work in different areas: the classroom, the educational institution, the community and in the context of their own personal and professional development. The areas we cite should serve as scenarios in which teaching tasks and competences have to be put into action and are therefore settings in which they can be evaluated (Lázaro & Gisbert, 2015, p. 34).
Thus, the assessment of students’ TDC was designed based on the COMDID rubric to be used at two times:
- Final TDC assessment. Evaluation by means of a tool, specifically a CRT that measures students’ knowledge based on the components of TDC. The CRT in this case is used to assess the absolute status of the subject (student) with respect to mastery of a well-defined construct (teacher digital competence) and is useful for classifying students into one of two mutually exclusive categories, competent or not competent, in relation to a cut-off point established by the judgment of experts in TDC. It is important to point out that, unlike the self-assessment, this test is not based on the students’ self-perception of their competence, but rather objectively measures their capacities in situations inherent to the teaching profession.
- After completing the training process, which lasted an entire academic year (12 ECTS credits) and required the students to participate in various activities oriented towards the development of TDC, the assessment process was undertaken and the results were obtained using the evaluation tool presented in this article. The tool was used to collect the data corresponding to the final, summative evaluation of the process.
Objectives
The review of the literature underscores the need for an assessment instrument that allows us to reliably and validly measure knowledge related to competences, specifically, to measure knowledge in samples of undergraduate students in primary education. Therefore, we set out the following specific objectives:
– Objective 1. To construct an instrument for the objective assessment of TDC knowledge in pre-service teachers.
– Objective 2. To establish a cut-off point and conduct a preliminary pilot study to lay the foundations for the subsequent external validation of the COMDID-C instrument.
METHOD
Sample
After the literature review and the COMDID-C question design processes were complete, the study was carried out in two phases: first the group of experts (sample 1) was contacted, and then a group of students (sample 2) participated in the pilot phase. The experts who participated in this study are researchers and teachers with experience in at least one of two areas: initial teacher training or TDC training. The minimum period of teaching experience among the expert sample was two years and the maximum was 30. Experience in TDC ranged from one to seven years. In total, six women and five men (hereinafter called the experts) from Rovira i Virgili University participated in the study. The youngest expert was 23 years old and the oldest was 56 (M = 39.4; SD = 12.34). They were all part of the COMDID-A assessment. Sample 2 in the pilot phase consisted of 25 students in the second year of the double degree in early childhood and primary education with a specialization in English at Rovira i Virgili University. The average age in this sample was 22.41 years (SD=2.71). There were, in total, three men (12%) and 22 women (88%) who participated in the validation of the COMDID-C CRT for the evaluation of digital competence.
Design and tool
A paper version of the test was administered during the final session of the subject Organization of the School Space, Materials and Teacher Skills. The test consisted of 88 questions divided between the two parallel forms we sought to validate. Each form covers the four dimensions of TDC: D1. Didactic, curricular and methodological aspects; D2. Planning, organization and management of digital technological resources and spaces; D3. Relational aspects, ethics and security; D4. Personal and professional aspects. Half of the questions (44) make up the “Test A” form (Table 2), in which each answer option is scored dichotomously (0 or 1); the other 44 questions make up the “Test B” form (Table 3), in which the options are scored on a five-point graduated scale from the incorrect answer (0 points) to the completely correct one (1 point): 0, 0.25, 0.5, 0.75 and 1. The two forms were designed in accordance with the self-assessment questionnaire COMDID-A (REFS), which has been validated and whose factorial structure is divided into the same four dimensions that this instrument evaluates (see Table 2).
**Test A example.** Dimension 4: Personal and professional. Descriptor 4.1: Personal Learning Environment.

Question: In a digital educational resources website, there are applications for all educational levels and all fields of knowledge. What is the best criterion for selecting the most appropriate one?

| Answer | Score |
|---|---|
| a. Take into account those resources that may favor the teaching-learning process. | 1.00 |
| b. Those resources that are more motivating for students. | 0.00 |
| c. If they have been published on the website, they are useful. | 0.00 |
| d. Resources that favor easier understanding of the contents. | 0.00 |
**Test B example.** Dimension 4: Personal and professional. Descriptor 4.1: Personal Learning Environment.

Question: During a School Council meeting, the parents’ association of the educational center raises doubts about the convenience of using a virtual learning environment for teaching and learning. What argument would you use to justify its use?

| Answer | Score |
|---|---|
| a. We live in a digital society and it is important to provide students with tools like this. | 0.75 |
| b. The Department of Education of the Administration demands its use. | 0.25 |
| c. Its use improves the digital competence of students. | 0.50 |
| d. It is best not to use it if parents do not agree. | 0.00 |
| e. Its use helps to better manage the teaching-learning process. | 1.00 |
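To make the two scoring schemes concrete, the sketch below (not the authors' code) totals a student's per-item scores, where Test A items contribute 0 or 1 and Test B items one of 0, 0.25, 0.5, 0.75 or 1, and classifies the student against the 70% cut-off described in the Data analysis section; the response vector and function name are hypothetical.

```python
# Illustrative scoring sketch (hypothetical data, not the authors' code).
# Each COMDID-C item has a maximum score of 1: Test A items score 0 or 1,
# Test B items score 0, 0.25, 0.5, 0.75 or 1. A student is classified as
# competent when the total reaches the 70% cut-off set by the expert judges.

CUT_OFF = 0.70  # fraction of the maximum possible score

def classify(item_scores, cut_off=CUT_OFF):
    """Return 'competent' or 'not competent' from per-item scores."""
    fraction = sum(item_scores) / len(item_scores)
    return "competent" if fraction >= cut_off else "not competent"

# Hypothetical 6-item response vector (a real form has 44 items):
print(classify([1, 1, 0.75, 1, 0.5, 1]))  # 5.25/6 = 0.875 -> competent
```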
After the two parallel forms of the test were administered to a sample of students, the students were asked for feedback, and their proposals for improving item formulation were taken into account in accordance with the criteria of different experts, who analyzed these comments in two work sessions. For example, in these sessions it was decided to eliminate the names of specific ICT tools in order to increase the content validity and the reliability of the COMDID over time. For instance, in the question linked to personal learning environments (PLE), the mention of the tool Symbaloo was deleted to avoid specific references, in this case to a platform for compiling resources with which one can configure a PLE.
Before we could analyze the reliability of the COMDID-C instrument, and because it is a CRT that measures the level of competence of the students, it was necessary to establish the cut-off point, in other words, the score above which the CRT could be considered passed and the subject competent in TDC. This point was determined by experts (judges), in the matter being assessed; however, the problem that arose was how to reconcile the different criteria used by each judge in finding a cut-off point. In this case, we used the Angoff method to address this issue.
The Angoff method (Angoff, 1971) remains the most widely used approach in practice, particularly in education, and it has been adapted for use with different objectives (Cizek & Bunch, 2007). A group of assessors formulate their judgments with a hypothetical group of minimally competent students in mind. For every question on the test, the judges estimate how these hypothetical students would respond, and this yields a final value or minimum score, which is the cut-off point for the group to whom the test will ultimately be applied (Muñiz, 2010).
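The procedure can be sketched as follows; the judges and per-item probability estimates below are hypothetical, used only to illustrate how individual judgments aggregate into a cut score.

```python
# Sketch of the Angoff (1971) standard-setting procedure (hypothetical data):
# each judge estimates, for every item, the probability that a minimally
# competent student would answer it correctly; each judge's estimates are
# summed, and the cut score is the average of those sums across judges.

judges = {  # judge -> per-item probability estimates (4-item test)
    "judge_1": [0.8, 0.6, 0.8, 0.7],
    "judge_2": [0.7, 0.7, 0.8, 0.8],
    "judge_3": [0.9, 0.6, 0.8, 0.8],
}

def angoff_cut_off(judgments):
    per_judge = [sum(probs) for probs in judgments.values()]  # expected score per judge
    return sum(per_judge) / len(per_judge)                    # average across judges

print(angoff_cut_off(judges))  # ≈ 3.0, i.e. 75% of the 4-item maximum
```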
Data analysis
Measuring the cut-off point: In keeping with the Angoff method described above, the group of experts (class lecturers, team researchers and experts in TDC) set the cut-off at 70 percent for the specific group that comprises our sample. Students who did not achieve the indicated score were classified as not competent. It is important to mention that this method can be applied both to the multiple-choice test (Test B) and to the dichotomous test (Test A). Therefore, we consider this result a good point of departure for measuring the reliability of the two parallel COMDID-C forms.
The validity of the content of the COMDID-C items was addressed throughout the development process, as detailed throughout section 3.
For the reliability analysis, due to the design of the subject, and in order to be able to measure TDC, the session had to be held at the end of the course, which made it impossible to administer the test twice to this sample. This is why we administered the parallel forms of the COMDID-C, as mentioned in the previous section. With these data we measured Cohen’s kappa (Cohen, 1960), the most commonly used coefficient of internal consistency for this type of test (Landis & Koch, 1977; Shoukri, Asyali, & Donner, 2004). At the same time, to complement those results, we also analyzed the construct validity, or internal reliability, with the Livingston coefficient (Livingston, 1972) for both of the parallel forms (Test A and Test B). We will discuss the results in the next section.
Stability: The stability of this test over time (test-retest stability or reliability) will be studied with future student groups (Livingston & Lewis, 1995).
RESULTS
The quantitative analysis of the data was performed with SPSS software version 21.0 for Windows.
To calculate both Cohen’s kappa and the Livingston coefficient with a single administration, we classified the students in the sample as competent or not competent.
| Cut-off point = 70 | Test B: Competent | Test B: Not competent | Total |
|---|---|---|---|
| Test A: Competent | 19 | 3 | 22 |
| Test A: Not competent | 0 | 3 | 3 |
| Total | 19 | 6 | 25 (N) |
The calculation of Cohen’s kappa yielded the result k = 0.603 (p ≤ .005, n = 25 categorizations). In general, kappa coefficient values ranging between 0.6 and 0.8 are considered acceptable, while those over 0.8 are interpreted as very good (Landis & Koch, 1977). Therefore, our parallel versions of the test can be considered reliable.
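As a sanity check, Cohen’s kappa can be recomputed directly from the 2×2 classification counts reported above; this minimal Python sketch (not the authors’ SPSS procedure) reproduces the published value.

```python
# Cohen's kappa (Cohen, 1960) recomputed from the 2x2 classification table:
# agreement between Test A and Test B at the 70% cut-off, corrected for the
# agreement expected by chance.

def cohens_kappa(table):
    """table[i][j]: count of students classified i by Test A and j by Test B."""
    n = sum(sum(row) for row in table)
    p_observed = sum(table[i][i] for i in range(len(table))) / n
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    p_expected = sum(r * c for r, c in zip(row_totals, col_totals)) / n ** 2
    return (p_observed - p_expected) / (1 - p_expected)

# Rows/columns ordered [competent, not competent], as in the table above:
table = [[19, 3],
         [0, 3]]
print(round(cohens_kappa(table), 3))  # 0.603
```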
The calculation of the Livingston coefficient with the data shown in Table 5 yielded the following results.
| Cut-off point = 70 | Mean (X̄) | SD (Sx) | rxx |
|---|---|---|---|
| Test A | 7.6 | 0.5 | 0.30 |
| Test B | 7.4 | 0.7 | 0.32 |
For Test A, we obtained k2 = 0.472 (p ≤ .005, n = 25). For Test B, the Livingston coefficient is k2 = 0.723 (p ≤ .005, n = 25). Conceptually, k2 can be interpreted as the percentage of cases that would be classified in the same category if the same test were taken again. As with traditional reliability coefficients, values equal to or exceeding 0.70 can be considered acceptable. On these grounds, we can confirm that form B of the test is reliable according to the data, but that form A will require further study before its validity as a version of COMDID can be confirmed (see the following section for discussion).
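Conceptually, Livingston’s coefficient is classical reliability adjusted for how far the group mean sits from the cut score. The sketch below implements the standard formula; the numeric inputs, including the raw cut score of 7.0, are illustrative assumptions and are not intended to reproduce the study’s exact values.

```python
# Livingston's (1972) k2 for criterion-referenced tests: classical reliability
# (r_xx) adjusted for the squared distance between group mean and cut score.
# Inputs are illustrative, not the study's exact data.

def livingston_k2(reliability, mean, sd, cut_off):
    d2 = (mean - cut_off) ** 2          # squared distance from the cut score
    return (reliability * sd ** 2 + d2) / (sd ** 2 + d2)

# The further the group mean lies from the cut score, the more consistent the
# competent/not competent classification, so k2 rises above raw reliability:
print(livingston_k2(reliability=0.30, mean=7.6, sd=0.5, cut_off=7.0))  # ≈ 0.713
```

When the mean sits exactly on the cut score, k2 reduces to the raw reliability, which is why classification consistency benefits from a sample whose scores are not clustered at the cut-off.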
DISCUSSION AND CONCLUSIONS
The objectives of this study were, first of all, to construct an instrument for the objective assessment of TDC knowledge in pre-service teachers. This instrument is called COMDID-C, because, as explained in the introduction, it is based on the COMDID rubric. The construction of this new tool has been approached as part of a comprehensive, mixed method (both qualitative and quantitative) process that considers all of the stakeholders for the study of TDC.
Competence involves putting into action conceptual knowledge, procedural knowledge and attitudes to be able to resolve a particular situation (OECD, 2011; Perrenoud, 2005). Thus, we are aware that competency assessment is a complex process and that it must be approached from a broad perspective and through the use of different assessment techniques and instruments (Pallisera, Fullana, Planas, & Valle, 2010). With this in mind, we have addressed the limitations of using a single assessment tool to measure capacities related to a complex competence like TDC. Before we can continue to advance the reliability process for this instrument, we believe a study with a larger sample must be conducted in order to collect evidence of the general behavior of the tool and to continue with the process of its development, studying in particular its external validity.
With regard to the second objective, establishing the cut-off point and carrying out a preliminary pilot study to lay the foundations for the subsequent external validation of the COMDID-C instrument, the kappa value indicates that the two tests are parallel; in other words, we can administer either one interchangeably. In addition, answering all 88 questions takes a long time, so we believe that only the 44 questions of either Test A or Test B should be administered. Furthermore, we need a larger sample in order for the Livingston coefficient to yield better results and to ensure validity, with a minimum of 44 students to increase the value of Cohen’s kappa and report greater reliability for the COMDID.
Specifically, and from the results of the psychometric analysis, it has become clear that, although reliable, version A of the COMDID (with dichotomous responses) should continue to be studied in larger samples. Within the constraints of this first quantitative study, and in keeping with Clark and Watson (1995), we can affirm that a multiple-choice test offers better statistical and reliability results.
Similarly, we stress the need to design the assessment so as to ensure coherence between what is being evaluated and the procedure used to evaluate it (Villa & Poblete, 2011, p. 150). In this regard, we believe that the test we have used, administered at the end of the formative process for a summative purpose, offers reliable data for measuring the knowledge acquired by the students and the development of their TDC.
In the near future we suggest the test should be administered by means of a technological solution that allows the student to obtain immediate feedback and recommendations on how they can improve their skills. Thus, the evaluation would be more oriented towards learning, fostering the student’s capacity for self-regulation (Gil-Flores, 2012, p. 135).
FUNDING
Funded by: AGAUR Agència de Gestió d’Ajuts Universitaris i de Recerca [Agency for Management of University and Research Grants], Spain
Funder Identifier: http://dx.doi.org/10.13039/501100003030
Award: 2017ARMIF00031
Research funded by AGAUR. ADEDIM project: Avaluació i certificació de la competència digital docent en la formació inicial de mestres: una proposta de model per al sistema universitari català (Ref. 2017ARMIF00031).
REFERENCES
- Angoff, W. H. (1971). Scales, norms, and equivalent scores. In R. L. Thorndike (Ed.), Educational measurement (2nd ed.). Washington: American Council on Education.
- Carless, D., Joughin, G., & Mok, M. M. C. (2006). Learning-oriented assessment: principles and practice. Assessment & Evaluation in Higher Education, 31(4), 395-398. doi:10.1080/02602930600679043
- Carless, D. (2007). Learning-oriented assessment: conceptual basis and practical implications. Innovations in Education and Teaching International, 44(1), 57-66. doi:10.1080/14703290601081332
- Clark L. A., & Watson D. (1995). Constructing validity: basic issues in objective scale development. Psychol Assessment, 7, 309-319. doi:10.1037/1040-3590.7.3.309
- Cizek, G. J., & Bunch, M. B. (2007). Standard setting: A guide to establishing and evaluating performance standards on tests. Thousand Oaks: SAGE Publications, Inc. doi:10.4135/9781412985918
- Cohen, J. (1960). A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20 (1), 37-46. doi:10.1177/001316446002000104
- Departament d’Ensenyament (2016). Resolució ENS/1356/2016, de 23 de maig, per la qual es dóna publicitat a la definició de la Competència digital docent. Barcelona: Diari Oficial de la Generalitat de Catalunya núm. 7133.
- Enlaces (2008). Estándares TIC para la Formación Inicial Docente: Una propuesta en el contexto Chileno. Ministerio de Educación, Gobierno de Chile.
- Enlaces (2011). Competencias y estándares TIC para la profesión docente. Centro de Educación y Tecnología (Enlaces). Ministerio de Educación, Gobierno de Chile.
- Esteve, F. (2015). La competencia digital docente. Análisis de la autopercepción y evaluación del desempeño de los estudiantes universitarios de educación por medio de un entorno 3D. Rovira i Virgili. Retrieved from http://www.tdx.cat/handle/10803/291441
- European Commission (2007). Key competencies for lifelong learning: European Reference Framework, Office for Official Publications of the European Communities, Luxembourg. Retrieved from https://www.erasmusplus.org.uk/file/272/download
- European Commission (2018). Proposal for a council recommendation on key competences for lifelong learning. Retrieved from https://ec.europa.eu/education/sites/education/files/annex-recommendation-key-competences-lifelong-learning.pdf
- European Union (2009). Council conclusions of 12 May 2009 on a strategic framework for European cooperation in education and training (ET 2020). Retrieved from https://eur-lex.europa.eu/legal-content/EN/ALL/?uri=CELEX:52009XG0528(01)
- Fraser, J., Atkins, L., & Richard, H. (2013). DigiLit leicester. Supporting teachers, promoting digital literacy, transforming learning. Leicester City Council.
- Generalitat de Catalunya (2016) ENS/1356/2016, de 23 de maig, per la qual es dóna publicitat a la definició de la Competència digital docent, DOGC Núm. 7133. Retrieved from http://dogc.gencat.cat/ca/pdogc_canals_interns/pdogc_resultats_fitxa/?action=fitxa&documentId=730633&language=ca_ES
- Generalitat de Catalunya (2018). Competència digital docent del professorat de Catalunya. Barcelona: Departament d’Ensenyament. Retrieved from http://ensenyament.gencat.cat/ca/departament/publicacions/monografies/competencia-digital-docent/
- Gil-Flores, J. (2012). La evaluación del aprendizaje en la universidad según la experiencia de los estudiantes. Estudios sobre Educación, 22, 133-153.
- Gisbert-Cervera, M., & Lázaro-Cantabrana, J. L. (2015). Professional development in teacher digital competence and improving school quality from the teachers’ perspective: a case study. Journal of New Approaches in Educational Research, 4(2), 115-122. doi:10.7821/naer.2015.7.123
- Gutiérrez, I., & Serrano, J. (2016). Evaluation and development of digital competence in future primary school teachers at the University of Murcia. Journal of New Approaches in Educational Research, 5(1), 51-56. doi:10.7821/naer.2016.1.152
- Ibarra, M. S., & Rodríguez, G. (2010). An approach to the dominant discourse of learning assessment in higher education. Revista de Educación, 351, 385-407.
- INTEF (2014). Marco Común de Competencia Digital Docente. Instituto Nacional de Tecnologías Educativas y de Formación del Profesorado (INTEF). Retrieved from http://educalab.es/documents/10180/12809/MarcoComunCompeDigiDoceV2.pdf/e8766a69-d9ba-43f2-afe9-f526f0b34859
- INTEF (2017). Marco Común de Competencia Digital Docente. Retrieved from http://educalab.es/documents/10180/12809/MarcoComunCompeDigiDoceV2.pdf
- ISTE (2008). National educational technology standards for teachers. Washington DC: International Society for Technology in Education.
- Landis, J. R., & Koch, G. G. (1977). The measurement of observer agreement for categorical data. Biometrics, 33(1), 159-174. doi:10.2307/2529310
- Lázaro, J. L. (2015). La competència digital docent com a eina per garantir la qualitat en l’ús de les tic en un centre escolar. Universitat Rovira i Virgili. Retrieved from http://www.tdx.cat/handle/10803/312831
- Lázaro, J. L., & Gisbert, M. (2015). Elaboració d’una rúbrica per avaluar la competència digital del docent. Universitas Tarraconensis. Revista de Ciències de l’Educació, 1(1), 48–63. doi:10.17345/ute.2015.1.648
- Lázaro, J. L., Esteve, V., Gisbert, M., & Sanromà, M. (2016). Diseño y validación de actividades en un entorno de simulación 3D para el desarrollo de la competencia digital docente en los estudiantes del grado de educación. En R. Roig (Ed.), Tecnología, innovación e investigación en los procesos de enseñanza-aprendizaje (pp. 2606-2615). Barcelona: Octaedro.
- Lázaro, J. L., Gisbert, M., & Silva, J. E. (2018). Una rúbrica para evaluar la competencia digital del profesor universitario en el contexto latinoamericano. Edutec, Revista Electrónica de Tecnología Educativa, (63), 1-14. doi:10.21556/edutec.2018.63.1091
- Livingston, S. A., & Lewis, C. (1995). Estimating the consistency and accuracy of classifications based on test scores. Journal of Educational Measurement, 32(2), 179-197. doi:10.1111/j.1745-3984.1995.tb00462.x
- Martos-Garcia, D., Usabiaga, O., & Valencia-Peris, A. (2017). Students’ perception on formative and shared assessment: Connecting two universities through the Blogosphere. Journal of New Approaches in Educational Research, 6(1), 64-70. doi:10.7821/naer.2017.1.194
- Ministerio de Educación Nacional (2013). Competencias TIC para el desarrollo profesional docente. Retrieved from http://goo.gl/WbqS9L
- Muñiz, J. (2010). Teoría clásica de los test. Madrid: Editorial Pirámide.
- OCDE (2011). Informe habilidades y competencias del siglo XXI para los aprendices del nuevo milenio en los países de la OCDE.
- Pallisera, M., Fullana Noell, J., Planas Lladó, A., & Valle Gómez, A. D. (2010). La adaptación al Espacio Europeo de Educación Superior en España: los cambios/retos que implica la enseñanza basada en competencias y orientaciones para responder a ellos. Revista Iberoamericana de Educación, 52(4), 1-13.
- Perrenoud, P. (2005). La universitat entre la transmissió de coneixements i el desenvolupament de competències. En J. Carreras, & P. Perrenoud (Eds.), El debat sobre les competències en l’ensenyament universitari. Barcelona: Universitat de Barcelona.
- Redecker, C., & Punie, Y. (2017). European framework for the digital competence of educators: DigCompEdu. In Ch. Redecker, & Y Punie (Ed.), European framework for the digital competence of educators. Luxembourg: Publications Office of the European Union.
- Sancho, J., & Padilla, P. (2016). Promoting digital competence in secondary education: are schools there? Insights from a case study. Journal of New Approaches in Educational Research, 5(1), 57-63. doi:10.7821/naer.2016.1.157
- Shoukri, M. M., Asyali, M. H., & Donner, A. (2004). Sample size requirements for the design of reliability study: Review and results. Statistical Methods in Medical Research, 13, 251-271. doi:10.1191/0962280204sm365ra
- Silva, J., Miranda, P., Gisbert, M., Morales, J., & Onetto, A. (2016). Indicadores para evaluar la competencia digital docente en la formación inicial en el contexto chileno-uruguayo / Indicators to assess digital competence of teachers in initial training in the Chile-Uruguay context. Revista Latinoamericana de Tecnología Educativa - RELATEC, 15(3), 55-67.
- UNESCO (2008). Estándares de competencia en TIC para docentes. Retrieved from http://www.eduteka.org/EstandaresDocentesUNESCO.php
- Villa, A., & Poblete, M. (2011). Evaluación de competencias genéricas: principios, oportunidades y limitaciones. Bordón. Revista de Pedagogía, 63(1), 147-170.