Our visual world is both extremely complex and extremely structured. Our ability to capitalize on existing regularities is key to successfully exploring and exploiting our surroundings. So-called positional regularities can be particularly useful for detecting objects of interest within rich visual environments: certain objects are much more likely to appear in certain parts of the visual field. Some of these positional regularities are consistent throughout our lifetimes (e.g., airplanes appear more often in the upper visual field, whereas shoes appear more often in the lower visual field). Recent evidence suggests that the visual system favors prediction-consistent input: objects gain prioritized access to awareness when they are presented at a typical location (Kaiser & Cichy, 2018). In this study, we investigate whether human observers can flexibly overturn these over-learned predictions in the face of novel, conflicting evidence. To do so, we manipulate long-term positional regularities (objects appearing at typical or atypical visual field locations) alongside short-term positional regularities (objects appearing with high or low probability at different locations in the visual field), and measure how quickly observers detect these objects. In doing so, we aim to uncover the mechanisms underlying position-specific biases in object perception.