# Cross Encoder

![](Introduction%20-%20Recent%20Developments%20in%20Neural%20Search%209-8%20screenshot.png)

In a cross encoder we no longer pass the query and document through the network independently. Instead, we concatenate them into one long input to a transformer network. Cross attention is then computed between all the words in the query and all the words in the document, and the output is a score between 0 and 1 indicating how relevant the document is for the given query.

Cross encoders significantly improve performance compared to a [Dual Encoder](Dual%20Encoder.md). However, they are slow: every query-document pair requires a full forward pass through the transformer, so document representations cannot be precomputed offline the way they can with a dual encoder.

![](Introduction%20-%20Recent%20Developments%20in%20Neural%20Search%2010-3%20screenshot.png)

---
Date: 20230607
Links to:
Tags:
References:
* []()
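The idea above can be sketched in a few lines of PyTorch. This is a toy illustration, not any specific model from the lecture: query and document token ids are concatenated into one sequence, so the transformer's self-attention attends across all query and document tokens jointly, and a small head squashes the pooled output to a relevance score in [0, 1]. All names and sizes here are made up for illustration.

```python
import torch
import torch.nn as nn

class ToyCrossEncoder(nn.Module):
    """Illustrative cross encoder: scores a (query, document) pair jointly."""

    def __init__(self, vocab_size=100, d_model=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, 1)  # relevance head

    def forward(self, query_ids, doc_ids):
        # Concatenate query and document into a single input sequence,
        # so attention runs between every query token and every doc token.
        x = torch.cat([query_ids, doc_ids], dim=1)
        h = self.encoder(self.embed(x))
        # Pool the first position and squash to a [0, 1] relevance score.
        return torch.sigmoid(self.head(h[:, 0, :]))

model = ToyCrossEncoder()
query = torch.randint(0, 100, (1, 5))   # 5 query token ids
doc = torch.randint(0, 100, (1, 20))    # 20 document token ids
relevance = model(query, doc).item()
print(relevance)  # a score between 0 and 1
```

Note the cost this structure implies: scoring N documents for one query means N full forward passes, which is why cross encoders are typically used to re-rank a small candidate set retrieved by a faster dual encoder.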